The Second Coming of UK Industrial Strategy

Industrial strategy, as a strand of economic management, was killed forever by the turn to market liberalism in the 1980s. At least, that’s how it seemed in the United Kingdom, where the government of Margaret Thatcher regarded industrial strategy as a central part of the failed post-war consensus that its mission was to overturn. The rhetoric was about uncompetitive industries producing poor-quality products, kept afloat by oceans of taxpayers’ cash. The British automobile industry was the leading exhibit, not at all implausibly, for those of us who remember those dreadful vehicles, perhaps most notoriously exemplified by the Austin Allegro.

Meanwhile, such things as the Anglo-French supersonic passenger aircraft Concorde and the Advanced Gas-cooled Reactor program (the flagship of the state-controlled and -owned civil nuclear industry) were subjected to serious academic critique and deemed technical successes but economic disasters. They exemplified, it was argued, the outcomes of technical overreach in the absence of market discipline. With these grim examples in mind, over the next three decades the British state consciously withdrew from direct sponsorship of technological innovation.

In this new consensus, which coincided with a rapid shift in the shape of the British economy away from manufacturing and toward services, technological innovation was to be left to the market. The role of the state was to support “basic science,” carried out largely in academic contexts. Rather than an industrial strategy, there was a science policy. This focused on the supply side—given a strong academic research base, a supply of trained people, and some support for technology transfer, good science, it was thought, would translate automatically into economic growth and prosperity.

And yet today, the term industrial strategy has once again become speakable. The current Conservative government has published a white paper—a major policy statement—on industrial strategy, and the opposition Labour Party presses it to go further and faster.

This new mood has been a while developing. It began with the 2007-8 financial crisis. The economic recovery following that crisis has been the slowest in a century. A decade on, with historically low productivity growth, stagnant wages, and persistently profound regional economic inequalities, coupled with souring politics and the dislocation of the United Kingdom’s withdrawal from the European Union, many people now sense that the UK economic model is broken.

Given this picture, several questions are worth asking. How did we get here? How have views about industrial strategy and science and innovation policy changed, and to what effect? Going forward, what might a modern UK industrial strategy look like? And what might other industrialized nations experiencing similar political and economic challenges learn from these experiences?

Changing views about industrial strategy and science policy have accompanied wider changes in political economy. The United Kingdom in 1979 was one of the most research-intensive economies in the world. A very significant industrial research and development (R&D) base, driven by conglomerates such as BAC (in aerospace), ICI (in chemicals and pharmaceuticals), and GEC (in electronics and electrical engineering), was accompanied by a major government commitment to strategic science.

As in other developed nations at the time, the United Kingdom’s extensive infrastructure of state-run research establishments developed new defense technologies, as part of what the historian David Edgerton called the “warfare state.” Civil strategic science was not neglected either; nationalized industries such as the Central Electricity Generating Board and the General Post Office (later to become British Telecommunications) ran their own laboratories and research establishments in areas such as telecommunications and energy. The Atomic Energy Authority carried out both military and civil nuclear research.

This situation was the product of a particular consensus established following the Second World War. From the left wing of the science-and-technology establishment there was a pre-war enthusiasm for central planning, most coherently and vocally expressed by the Marxist crystallographer J. D. Bernal. From the right, there were the military engineers and capitalist chemists who built the Cold War state. From the left side of politics, there was Harold Wilson, proclaiming in 1963 that the United Kingdom would be modernized in the “white heat” of the scientific and technological revolution, a program his government pursued after taking office in 1964. From the right, there was the determination, in the face of the UK’s relative geopolitical decline and economic difficulties, to remain a front-rank military power, with the accompanying decision to develop and maintain an independent nuclear weapons capability.

The ideological basis for an attack on this consensus was developing in the 1950s and 1960s. The leading figure here was Friedrich Hayek, an Austrian-British economist and philosopher and author of forceful critiques of the notion of central planning in general. His friend and intellectual ally, the chemist Michael Polanyi, adapted this argument specifically to oppose the case for planning and direction in science. Polanyi insisted on a strict division between pure science and applied science, introducing the idea of an independent “republic of science” that should remain free of any external direction. This idea was, and remains, very attractive to the world of elite academic science, though it is debatable whether this powerful myth ever described an actual, or indeed a desirable, situation.

Margaret Thatcher was the critical individual through whom these ideas became translated into policy. The influence of Hayek on Thatcher’s general thinking about economics and policy is well known. But Thatcher was also a scientist, whose practical experience was in the commercial world, as an industrial chemist. In a 2017 article in Notes and Records, the Royal Society journal of the history of science, the historian Jon Agar traced the influence of Thatcher’s own experience as a scientist on the evolution of science and innovation policy in her governments. In short, nothing in her experience, or in the experience of those who advised her, would persuade her that there was any special status for science that should exclude it from the market mechanisms to which she believed the whole economy should be subject.

Since the market turn, a key feature of science policy initiated by the Thatcher governments has been the decline of state-sponsored strategic science. By strategic science, I mean science that directly supports what the state regards as strategically important. The outstanding category here is of course the science directly motivated by defense needs. However, strategic science also includes science that supports the infrastructure of the market, for standards and regulation. It could also include science that supports environmental protection, communications infrastructure, medical advance, and the supply of energy.

The obvious point here is that the boundaries of what the state defines as strategic may change with time. Given that the Thatcher government had an explicit goal of shrinking the state, it is unsurprising that the state withdrew support for R&D in areas formerly thought of as strategic. The program of privatization took industries such as steel and telecommunications out of state control and left decisions about the appropriate degree of support for R&D to the market.

This had the largest effect in the area of energy. The privatized energy companies aimed to maximize returns from the assets they inherited, and levels of R&D fell dramatically. What had been a large-scale civil nuclear program was wound down. Even in the core area of defense, there was significant retrenchment, given extra impetus by the end of the Cold War. All but the most sensitive R&D capacity was privatized, most notably in the company QinetiQ. As Agar has emphasized, none of this was accidental; it was part of a conscious policy of withdrawing state support from any near-market science.

The withdrawal of the UK state from much strategic R&D provided a test of the notion favored by some free market ideologues that state spending on R&D crowds out private-sector spending. In fact the reverse happened: the intensity of private-sector R&D investment fell in parallel with the state’s. The relationship between the two may not be straightforward, however, as the market turn in UK politics led to significant changes in the way companies were run. A new focus on maximizing shareholder value and an enthusiasm for merger and acquisition activity in the corporate sector resulted in the loss of industrial research capacity.

The fate of the chemicals conglomerate ICI provides a salutary example. A hostile takeover bid from the corporate raider James Hanson in 1991 prompted ICI to “demerge” by separating its bulk chemicals and plastics business from its pharmaceuticals and agrochemicals businesses. The company housing pharmaceutical and agrochemical operations—Zeneca—underwent further divestments and mergers to produce the pharmaceutical company AstraZeneca and the agrochemical company Syngenta. The rump of ICI, attempting to pivot toward higher-value specialty chemicals, made an ill-timed, debt-financed purchase of National Starch. A series of divestments failed to lift the debt burden, and what was left of the company was sold to the Dutch company Akzo Nobel in 2007.

The story of the electronics and electrical engineering conglomerate GEC offers some parallels to the ICI story. In the 1990s, GEC sold its less exciting businesses in electrical engineering and electronics in order to make acquisitions in the booming telecom sector. Renamed Marconi, the company had to restructure after the bursting of the dot-com bubble, and finally collapsed in 2005.

These corporate misadventures destroyed a significant amount of the UK’s private-sector R&D capacity across a wide range of technologies. The common factor was a belief that the route to corporate success lay through corporate reorganization, mergers, acquisitions, and divestments rather than through researching and developing innovative new products. There are parallels here with the decline of long-term, strategic R&D in some big corporations in the United States, such as General Electric, AT&T Bell Laboratories, Xerox, Kodak, and IBM, though in the United Kingdom the loss of capacity was significantly greater and took place with no compensating new entrants at the scale, for example, of the US company Google.

It is also possible to interpret these stories as highlighting different beliefs about information and the power of markets. In the old industrial conglomerates such as ICI and GEC, long-term investments in R&D were made by managers and paid for by the retained profits of the existing businesses (which for companies such as GEC were substantially boosted by government defense contracts). A newer view emphasizes the role of the market as a more effective device for processing information; in this view, money locked up in the conglomerates would have been better returned to shareholders, who would have invested it in innovative, new companies.

There are arguments on both sides here. On one hand, questions can clearly be asked about the motivations and effectiveness of the managers of the conglomerates. They may seek to protect the incumbent position of existing technologies, they may be too reluctant to adopt new technologies developed outside their organization, and they may be inhibited by the scale and bureaucracy of their companies. On the other hand, one result of the turn to the markets has been a sequence of investment bubbles resulting in substantial misallocation of capital, together with a pervasive short-termism. Whatever the mechanisms at work, the outcome is not in doubt: a significant loss of private-sector R&D capacity in the United Kingdom since the Thatcher era.

The obverse of the ideological determination of Thatcher and her advisers to withdraw support from near-market research was a new valorization of “curiosity-driven” science. The result was a new, rather long-lasting consensus about the role and purpose of state-supported science that emphasized economic growth as its primary goal. But its tacit assumption was that innovation could be driven entirely from the supply side. In this view, the best way to make sure that state-supported science could contribute to a strong economy was by creating a strong underpinning of basic research, developing a supply of skilled people, and removing the frictions believed to inhibit knowledge transfer from the science base to the users of research.

The supply-side view of science policy was first clearly articulated in 1993, in a white paper introduced by the Conservative science minister William Waldegrave. This influential document halted a pattern of decline in research funding in the academic sector, using the classical market failure justification to call for the state to fund basic research. It reasserted the role of the private sector as the key funder of applied research, and with a continued program of privatization of government research establishments ensured a further withdrawal of the government from strategic research.

The advent of a Labour government in 1997 did not change matters. In line with the general acceptance of the post-Thatcher settlement, there was considerable policy continuity. A major policy statement in 2004, under the sponsorship of an influential and long-serving science minister, Lord Sainsbury, restated the principles of supply-side science policy.

The Sainsbury approach included new elements that reflected the changing corporate R&D landscape: more emphasis on spin-out companies based on protectable intellectual property and funded by venture capitalists, and on the aspiration to attract overseas investment. A sense that there was now too little private-sector research underpinned an explicit target for increasing business R&D over the next 10 years, to 1.7% of gross domestic product (a target that was conspicuously missed, as the figure currently stands at 1.1%).

The main practical effect of the 10-year investment framework was a series of real-terms increases in spending on academic research. This was accompanied by a further run-down of strategic research, with R&D spending by government departments continuing to decrease.

Meanwhile, policy-makers displayed a growing sense that the academic research base, now benefitting from a more generous funding settlement, should be pressed harder to make sure it delivered economic growth. This expectation manifested itself in a heightened rhetoric about “impact,” with various bureaucratic measures to incentivize and reward activities that produced such economic effects, whether through the formation of spin-out companies or through collaboration with established businesses. These measures culminated in the 2014 Research Excellence Framework, which included impact as a criterion to be assessed in university research, and whose results directly determine university research funding.

The emphasis on impact produced the paradoxical effect that even as the overall balance in the UK’s research system in fact shifted from strategic research toward undirected research, many people in the academic part of the system felt that they were being pressured to make their own research more applied.

The industrial policy of the Conservative governments between 1979 and 1997 was to not have an industrial policy. The New Labour government of 1997 broadly accepted this consensus, in particular resisting so-called vertical industrial policy—that is, specific measures in support of particular industrial sectors.

Yet absolute opposition to industrial policy was at times also honored in the breach. The government’s policy of partial devolution to Scottish and Welsh assemblies gave an economic development function to these administrations and to agencies in the English regions. In 2007 an innovation agency—the Technology Strategy Board—was given free-standing status, empowered to award collaborative R&D grants to industry and to oversee some cross-sector networking activities, mostly between industrial partners.

But it took the global financial crisis of 2007-8 to bring about a change in mood. A new, powerful business minister in Gordon Brown’s Labour government, Peter Mandelson, emphasized the need to rebalance the economy away from the financial sector and toward manufacturing. The automobile sector was singled out for a series of interventions. Most strikingly, plans called for the government to form a new class of translational research centers, modeled on the successful and much-envied centers developed by the Fraunhofer Society, a major German research organization.

In 2010, the new Conservative-Liberal Democrat coalition government accepted the research center plan, continued the support for the automobile sector, and began to speak of industrial policy again. In practice, policy consisted of a mixture of sector-based support and the championing of selected technology areas, and it could be argued that many of the interventions were inadequate in scale. But perhaps the most important significance of this development was that after 30 years in which the very words industrial strategy were essentially unspeakable in the British state, there was now an acceptance, even in polite political circles, that support for industry was a proper role for government.

What does the innovation landscape in the United Kingdom now look like, after the dramatic shifts of the past three decades? The overall R&D intensity of the UK economy, which 30 years ago was among the highest in the world, is now low compared not only with traditional competitor economies, such as France, Germany, and the United States, but also with the fast-growing economies of East Asia, such as Korea and China.

Within the United Kingdom’s R&D enterprise, there is an academic science base that is very high performing when measured by academic metrics such as citations. But there are some notable problems on the industrial side. Uniquely for a developed economy of the UK’s size, more than half of industrial R&D is conducted by foreign-owned companies. This industrial R&D is concentrated in a few sectors, dominated by the pharmaceutical industry, with other major contributions in aerospace and computing. The biggest change in recent years has been in automobiles, where industrial R&D has more than doubled since 2010, perhaps reflecting the sector’s status as the test-bed of the new wave of industrial strategy.

State-supported translational research is, with a very few exceptions, weak. The new Fraunhofer-inspired “Catapult Centres,” established post-2010, are finding their feet. Two of the most successful centers were built around preexisting initiatives, and they are worth considering in more detail as demonstrations of how new translational research capacity can be created. These are the Warwick Manufacturing Group (WMG) at the University of Warwick and the Advanced Manufacturing Research Centre (AMRC) at the University of Sheffield. Both are the creations of individual, highly entrepreneurial academics (Lord Kumar Bhattacharyya at WMG and Keith Ridgway at AMRC), and both began with a strong sector focus (automotive at WMG and aerospace at AMRC).

Although both institutions have grown out of conventional research universities and remain associated with them, their success arises from a mode of operation very different from university-based science, even in applied and technical subjects. AMRC began as a collaboration with the aircraft manufacturer Boeing, soon joined by the aero-engine manufacturer Rolls-Royce. Much of the research is focused on process optimization, and it is carried out at industrial scale so that new processes can rapidly be transferred into manufacturing production.

A key feature of such translational research centers is the way that the large companies that form their core partners—Boeing and Rolls-Royce in the case of AMRC, and Jaguar Land Rover for WMG—can bring in smaller companies that are part of, or aspire to be part of, their supply chains, involving them in joint research projects. Another way in which these translational research centers extend the mission of the traditional research university is through a greater involvement in skills development at all levels, including the technical skills typical of an engineering apprenticeship program. One measure of the success of the institutions is the degree to which they have been able to attract new investment in high-value manufacturing into what since the 1980s had been underperforming regions that had failed to adapt to successive waves of deindustrialization.

Meanwhile, economists and policy-makers in the United Kingdom and the United States are increasingly recognizing that the effects of deindustrialization on regional economies have in the past been underestimated. For example, in a 2009 article in Harvard Business Review, Gary Pisano and Willy Shih, both professors of business administration, drew attention to the way in which manufacturing anchors what they called a “manufacturing commons,” the collective resources and knowledge that underpin a successful regional cluster.

These commons are based on the collective knowledge, much of it tacit, that drives innovations in both products and processes. A successful manufacturing commons is rooted in R&D facilities, networks of supplying companies, informal knowledge networks, and formal institutions for training and skills. Pisano and Shih’s key point is that the loss of a manufacturing plant, perhaps through outsourcing, can have a much greater impact than the direct economic impact of the loss of the plant’s jobs, by eroding this larger manufacturing commons.

But stories such as those of the Sheffield Advanced Manufacturing Research Centre suggest that manufacturing commons can be rebuilt. The emerging formula brings together several elements. Research facilities need to have an avowedly translational focus, and they should create strong research partnerships between or among academia, large companies already operating at the technological frontier, and smaller companies wishing to improve their innovation practices, possibly to make them more competitive as suppliers to the large companies. Education institutions need to focus on building skills at all levels. They should be linked with these research centers, creating clear pathways for individuals to progress from intermediate-level technical skills to the highest-level qualifications in technology and management. As these research facilities become successful and recognized, this should lead to a virtuous circle in which further inward investment is attracted and the existing business base grows in capability.

The past decade has seen a new consensus about industrial strategy emerge in the United Kingdom, to this extent at least: the Conservative government has a department with industrial strategy in its title (the Department for Business, Energy and Industrial Strategy) and has published a major policy document on the subject, and the opposition Labour Party advocates an industrial strategy as a major plank of its alternative economic policy.

To what extent is a consensus emerging on the substance of what an industrial strategy looks like? One attempt to articulate a new consensus has recently been made by the Industrial Strategy Commission, an independent initiative supported by the Universities of Sheffield and Manchester, of which I was a member.

In the commission’s view, the beginning of a strategy needs to recognize some of the real weaknesses of the UK economy now. One key issue that has become particularly pressing since the global financial crisis is the very low rate of domestic productivity growth. There is a global context here, in that productivity growth throughout the developed countries has been slowing since the 1980s. But the situation in the United Kingdom is particularly bad: levels of productivity were already significantly below those achieved in the United States, France, and Germany, and the slowdown since the global financial crisis has been dramatic.

The United Kingdom also has gross geographic disparities in economic performance, with an economy dominated by a single city, London. The UK’s second-tier cities underperform, there are many very poor urban areas that have not recovered from 1980s deindustrialization (analogous to the US Rust Belt), and many places in the rural and coastal peripheries have been left behind by economic success elsewhere.

As the commission sees it, an industrial strategy should be framed with a view of the whole economy, not just a few high-technology sectors. It needs to recognize the importance of the state as an actor uniquely able to coordinate activities and create new markets. And if it is to have a long life, the strategy needs to be linked to the broader long-term strategic goals of the state.

One positive aspect of the 1980s turn to free market liberalism has been an increased recognition of the importance of competition in driving innovation. But the wave of privatization that occurred has produced a set of industries (in transport, energy, and water, for example) that are heavily regulated by the state, but whose structure and incentives do not seem to reward new investment or innovation. This needs to be rethought.

The United Kingdom has underinvested in infrastructure for many years. For traditional hard infrastructure—roads and railways—the investment criteria used to assess new investments have rewarded parts of the country where the economy is already strong, and this must change. Of equal importance, investment needs to include the infrastructures underlying newer parts of the economy, such as mobile telephony and fast broadband internet coverage. Nor should the soft infrastructure that makes successful industrial societies function be neglected—in education and health, for example. The commission’s headline recommendation here is for a Universal Basic Infrastructure guarantee to ensure that all parts of the country have in place the infrastructure needed to make economic success possible.

Policy-makers across the political spectrum now seem to realize that the R&D intensity of the UK economy needs to increase. But this needs to be done in a way that considers the whole landscape: public- and private-sector, undirected, use-inspired, translational, and strategic. More emphasis is required on the translational part of the picture than we’ve seen before, and the links to skills at all levels need to be made more coherent. Currently the geographical distribution of R&D, in public and private sectors alike, is highly imbalanced, with the biggest investments made in the most prosperous parts of the country: London and the South East. This too needs to change; if new R&D institutions are to be set up, the role they can play in catalyzing regional economic growth needs to be explicitly considered when decisions are made on their location.

Above all, the United Kingdom needs to move beyond the supply-side science policy that has dominated innovation thinking for the past three decades. More attention needs to be paid to generating demand for innovation. Here the government can have a central role, by using its spending power much more purposefully to encourage innovation in the private sector, especially when linked to the strategic goals of the state. In the UK’s case, these include a long-term commitment to reducing the carbon intensity of the energy economy while maintaining the security and affordability of energy for domestic consumers and industry. The United Kingdom also maintains a wide, cross-party consensus in support of universal health care coverage. These goals are unlikely to be deliverable without substantial innovation. Done right, industrial strategy should enable the state to meet its strategic goals while at the same time providing new business opportunities for the private sector.

In the post-war years, the United Kingdom, like other developed countries, had a warfare state, which did successfully drive innovation. The innovation system associated with the warfare state was dismantled, and what has arisen in its place has not been sufficient to drive economic growth or to meet the long-term challenges UK society faces. This, too, seems to be a difficulty shared by the United States and other industrialized nations.

We should not be nostalgic for the Cold War, but the United Kingdom does now need to rebuild an innovation system appropriate for its current challenges. Rather than attempting to re-create the military-industrial complex of the past, we should aspire to a social-industrial complex that can drive the innovation that is needed to create a sustainable, effective, and humane health and social care system and to place the energy economy on a sustainable, low-carbon footing.

Scaling Up Policy Innovations in the Federal Government: Lessons from the Trenches

Large bureaucracies such as those of the federal government are notoriously slow to innovate. But in recent years, new technology-enabled approaches to helping government meet its public obligations have begun to find a foothold in bureaucratic culture. Many of these approaches rely on what is called “open innovation,” which means, in essence, that in today’s era of distributed knowledge, an organization should look both within and to external sources for ideas and should involve both its own personnel and outside parties and communities in creative efforts. In practice, open-innovation approaches such as incentive prizes and crowdsourcing are proving to be increasingly effective for achieving policy objectives across a variety of government agencies and programs. Consider these examples:

Such trailblazing federal projects are demonstrating how open-innovation approaches can improve the government’s capacity to deliver high-impact results across a diverse range of policy problems. Open-innovation approaches to problem solving have been in use for hundreds of years on a smaller scale by various national governments (Napoleon offered a cash prize in 1795 that led to the invention of canned food), nongovernment organizations, private companies, and individual researchers. So why are they now beginning to be scaled up in the US federal government? Part of the explanation is that new technology platforms are enabling projects to be set up more quickly and reach more people faster. But these projects don’t just design themselves. All of them were championed by innovators within the government—and being an innovator in government is hard. It takes persistence, stamina, and strategy to overcome what can often seem to be insurmountable organizational, legal, and cultural barriers to implementation. Any would-be government innovator knows that ideas that threaten the status quo often carry with them high professional risk. As Tom Kalil, my former boss at the White House Office of Science and Technology Policy (OSTP), has observed, each new project can feel as if it requires “hand-to-hand combat” to pull off.

Different policy innovations may follow very different pathways to implementation and encounter very different obstacles and opportunities along the way. For example, government agency scale-up of incentive prizes and challenges had to clear a daunting set of hurdles. A number of these stand out in particular. Let’s examine the timeline of actions taken to overcome them.

First came a series of external assessments, conducted starting as far back as 1999 by the National Academies, the Congressional Research Service, the Government Accountability Office, and consulting firms such as McKinsey & Company. Next came early authorization by Congress of pilot prize programs, initially at the Department of Defense’s Defense Advanced Research Projects Agency in 1999 and then at the National Aeronautics and Space Administration (NASA) and the Department of Energy in 2005. Then the White House demonstrated high-level support for prizes and challenges through the Strategy for American Innovation and the Open Government Directive, both issued in 2009, and through specific policy guidance provided in 2010 by the Office of Management and Budget (OMB).

Following on, OSTP convened in 2010 an informal community of practice that would later be led by the General Services Administration (GSA). Congress granted explicit government-wide prize authority through the America COMPETES Reauthorization Act in 2010. Various groups started to develop common program infrastructure (through the free online platform challenge.gov and NASA’s fee-for-service Center of Excellence for Collaborative Innovation) and to develop processes to meet congressionally mandated reporting requirements. And finally, over several years various agencies and groups collected information about what had been learned about innovations such as prize implementation and posted “toolkits” online for others to use.

This journey to scale brings us to the present day. During the time I served as the assistant director for open innovation at OSTP, the use of prizes as incentives to solve problems doubled, from 350 prizes prior to 2014 to nearly 700 when I left the office in May 2016.

Another open-innovation approach that I worked on at OSTP was citizen science and crowdsourcing. Whereas the government’s use of prizes scaled up mostly through a top-down process, citizen science and crowdsourcing were catalyzed by the unique passion and commitment of a grassroots community working outside of government, well before there was support at higher levels in government. In 2011, the Woodrow Wilson International Center for Scholars began hosting monthly roundtables on citizen science, crowdsourcing, and social media, connecting government with academic researchers. In 2012, a small number of federal employees and representatives of outside groups who had attended these roundtables convened at the first meeting of an informal Federal Community of Practice for Citizen Science and Crowdsourcing (conveniently shortened to CCS). This group would eventually grow to more than 350 members.

Starting in 2013, OSTP noticed the energy of the community and the effectiveness of the approaches, and the office began supporting these policy innovations through national strategies and plans, such as the second Open Government National Action Plan. Subsequently, the community partnered with OSTP in 2014-15 to develop a toolkit. These efforts catalyzed the formation in 2015 of a formal group of Agency Citizen Science and Crowdsourcing Coordinators; the development of centralized infrastructure at the GSA, including a project catalog developed in 2016 in collaboration with the Wilson Center that appears online at citizenscience.gov (which now lists more than 400 community citizen science and crowdsourcing initiatives); and passage in 2017 of explicit legal authority to pursue innovations through the America COMPETES reauthorization.

During my time at OSTP, as well as my years at several federal agencies and in the private sector as a management consultant to various federal agencies, I have struggled with bureaucratic obstacles to innovation again and again, while designing and implementing dozens of policy approaches, including, among others, incentive prizes, public dialogues, and “design thinking” education projects that take students through the five stages—empathize, define, ideate, prototype, and test—of design. Based on these 10-plus years of experience, I have identified eight lessons for program and project managers who want to expand and scale up innovative approaches to problem solving in government.

Legal and policy frameworks. Without a clear legal basis for a policy innovation, the road to implementation can be bumpy. Explicit legal authority is not necessarily required for an approach to be used, but it can be extremely helpful for scaling. For example, the federal government has offered prizes since the early 2000s. Early innovators figured out how to implement prizes under either existing legal authorities or previously passed laws that could be interpreted (on legal review) as applying to prizes. The March 2010 OMB policy memo summarized those existing legal authorities and helped empower other innovators who were trying to find a legal path to implementation. Having a clear summary and general interpretation to point to helps encourage new projects. The 2010 America COMPETES Reauthorization Act, providing all federal agencies broad and explicit authority to conduct prize competitions, set the stage for rapid expansion of prize programs.

Shared infrastructure and common platforms. Programs provided by the GSA have been critical in scaling up many innovative efforts. These programs provide a focal point for federal efforts on an approach-by-approach basis. The website data.gov, launched in 2009, now lists over 170,000 open data sets. Upwards of one hundred agencies have used challenge.gov since its debut in September 2010, launching more than 740 prizes totaling over $250 million. These programs are more than just websites for listing data sets and prizes. They provide shared services and infrastructure free to agencies that, in turn, allow individual innovators to launch early pilot projects without having to develop all of the supporting online infrastructure and resources. Data.gov and challenge.gov also employ small teams of full-time federal employees to provide critical government-wide policy support, training, community of practice management, metrics, and public outreach for anyone in the federal community interested in launching an open data or prize initiative.

Emergence and sustainability of communities of practice. Being an innovator within government can be lonely, and connecting like-minded people to each other is critical not only to sustaining the energy of early adopters but also to attracting new converts. I’ve mentioned the CCS, the grassroots community that is open to all federal practitioners working on, funding, or just interested in learning more about crowdsourcing and citizen science. Other communities of practice for innovative policy have also emerged within the government, working actively in open government, prizes, open data, artificial intelligence, social media, and more. Some of these communities are chaired by agency leaders, and some are coordinated by the GSA. Some actively meet and provide training for members, whereas others act more as a listserv for sharing information and ideas. No matter the details, however, the role these communities play as social connectors can often prove critically important in scaling up policy innovations.

Knowledge capture and sharing. Over the years that I spent encouraging people to use prizes, I often wished that I had available a “Prizes for Dummies” book. Sharing knowledge is fundamental for success, and the process often requires numerous meetings. To aid in such efforts, the second Open Government National Action Plan, issued in 2013, committed the government to developing open innovation toolkits that document best practices, case studies, and relevant policy and law and provide step-by-step instructions for creating open-innovation programs. The first toolkit, for citizen science and crowdsourcing, was launched in September 2015. The second, for prizes, went live in October 2016. Both toolkits were developed by federal employees experienced with implementing these approaches.

Budgets. The policy innovations at the project level that I’m concerned with here can only rarely be funded by specifically appropriated funds, and lack of dedicated programmatic funds is a recurring obstacle to scaling up new approaches. Sometimes finding resources means identifying appropriate pots of funds that can be leveraged through the annual federal budget process; other times it is necessary to persuade a program manager who controls funds to try something new. Both paths for securing new budgets are difficult, but the former especially requires sophistication and experience and works best if the aspiring innovator is strategically located in the White House, at OMB or another high-level policy council, or within some agency’s front office that is developing budget requests. Most federal employees are thus forced to rely on the second path for finding resources. Budgeting for innovative programs is made even more challenging by the annual budget planning process, which starts three years before funds are actually to be spent by the implementing agency. It takes patience and persistence not only to find resources but to maintain focus throughout the lengthy budget process. I saw colleagues “lose” resources after working hard to secure them up front because they didn’t continue to track and advocate for them throughout the entire multiyear budget cycle.

Agency processes. Standard protocols and processes for program management in federal agencies—in a word, bureaucracy—often represent huge barriers to scaling up policy innovations. Many innovative approaches to addressing policy needs require program and project managers to think fundamentally differently about what their problem is, who could possibly solve it, and what success would look like. At the program and project level, policy innovation may require a much greater focus on problem definition and user research than is needed when going through a typical contracting or grant-making approach. For example, the way many information technology contracts are written makes collaborative, iterative software development—agile software development, in Silicon Valley parlance—nearly impossible. The US Digital Service, a government team that uses technology and design to help a number of federal agencies deliver better services to the public, has confronted this barrier by providing comprehensive online support services—through the TechFAR Hub—aimed at correcting procurement misconceptions across the government.

Reporting requirements. Under the America COMPETES reauthorization, OSTP is required to report regularly to Congress on the use of incentive prizes. To gather this information, OSTP from 2010 to 2016 collected reports annually from each federal agency. (Starting in 2017, the reporting period is now every other year.) As a result, there are now available rich narratives and qualitative data sets for hundreds of prizes that not only explore the impact of each individual prize, but also enable the study of prize practices more generally to improve their use. These stories and data also show the public how the government is working to improve its services, use public funds wisely, and solve real problems.

External assessments and impact studies. Government officials and others looking to develop and implement policy innovations need to learn from earlier efforts. Thus, program leaders will need to regularly and rigorously assess how well their projects are working, to help in forming a data set of methods and impacts that can inform and improve future practice. External assessments of policy innovations can also help government managers make the case for continued or expanded funding and collaborative activities for scaling up successful approaches. But even as some policy innovations, such as citizen science, are already the focus of healthy interest from the academic community, other approaches, such as prize competitions, have not yet been subjected to significant academic scrutiny, despite the rich data sets available.

The strategies that I’ve described here for scaling up policy innovations have worked well for new open-innovation approaches such as prizes, citizen science, and crowdsourcing. They also appear to be key ingredients for scaling up other types of policy innovations, such as agile software development, user-centered design, and open data. And they could provide valuable guidance for adopting within the federal sector some of the promising new institutional practices emerging outside of government.

Yet scaling up a policy innovation and moving it into the mainstream of practice are not the same thing. If government tries to standardize best practices for policy innovation, it runs the risk of discouraging future innovation. For example, OMB circulars on grant management, and OMB advice to agencies on the federal acquisition regulations that apply to some of the innovations I’ve discussed here, have fostered caution and dampened creativity in the use of grants and contracts for other policy innovations across government. The government should try as much as possible to allow flexibility in how these and other innovative approaches are implemented. Facilitating policy innovation can help government be more responsive and effective. But efforts to standardize innovation processes in government may be counterproductive.

Philosopher’s Corner: Make Science Great Again

Watching the March for Science this past April could give people the feeling that they had traveled back to simpler times. One woman carried a sign that read, “I can’t believe I have to protest for reality.” Another sign read, “Progress in science = Progress for humanity.” It was a throwback to the 1950s, when statements such as “trust the experts” and “better living through chemistry” could be made without eliciting a knowing smile. All the language of postmodernism, in which claims to objectivity are seen as masking power and science is recognized as multiple, with experts who often disagree, had melted away. Facts no longer concealed judgments or failed to dictate the one “best” choice, and science was no longer entangled with technology, creating losers as well as winners.

The defenders of science today accuse President Trump, child of the postwar era, of propping up a sanitized version of the past. The United States was not so great then, they remind us. Remember leaded gasoline? Or McCarthyism? To say nothing of the racism and sexism. But might these defenders be doing the same thing, with a romanticized nostalgia for an ideal of science—and of its links to society—that never existed?

The allure of stability, of a firm metaphysical order, explains the nostalgia on both sides—those who voted for Trump and those protesting his attacks on science. Everything once solid is dissolving, and science-slash-technology turns out to be both refuge and culprit. It has created a global system that undermines traditions and communities as well as a media hall of mirrors in which our consciousness increasingly becomes episodic and distracted. Yet science also holds out the promise of terra firma, of truth and reality. Maybe both camps would find a common hero in the 1950s TV detective Joe Friday: just the facts, ma’am. Plainspoken, black-and-white.

Where does all of this leave the intellectual class—those who have insisted for so long that all is gray? They—we—have some soul-searching to do. After all, many of us have also been at the game of challenging “facts” by exposing their varied genealogies. We have waged our own deconstructive war on certainty. Now that a populist version of post-truth has arrived on the scene, should we switch sides and play the defenders of facts—after all we have done to those poor things?

Faith in the institutions that have traditionally policed the borders of truth—science, the media, and the university—is dropping precipitously. In the United States, the right wing is creating an alternative set of such institutions from Fox News to the Nongovernmental International Panel on Climate Change to Regent University. Creationists, anti-vaxxers, and even flat-Earthers are all over YouTube and beyond. The president thinks that climate change is a hoax, is dismantling scientific advisory bodies, and is not even appointing a presidential science advisor. Those who have spent their careers attacking government science and education agencies are now in charge of them. Tribal epistemologies are taking root, threatening to become tribal realities. Maybe it is time to drop the matches of criticism and pick up a fire hose of realism. Not all criticism is helpful, and not all realism need be naïve.

As the French philosopher Bruno Latour argued, this amounts to a shift in tactics as battlefield conditions change. Like good generals, we need to recognize that the threat no longer comes, as Latour commented, “from an excessive confidence in ideological arguments posturing as matters of fact … but from an excessive distrust of good matters of fact.” Doctors once had too much power over patients, and now perhaps (in the age of WebMD) they have too little. Experts once went under-questioned; now in the age of knee-jerk accusations of “fake news” they go over-questioned. We have left the age of patronizing and paternalistic authority and entered the age of paralyzing doubt and dangerous quackery.

Heraclitus once wrote that “Everything always has its opposite within itself.” Hegel gave this formulation a historical trajectory by tracing how each affirmation carries within itself a negation, which in turn becomes an affirmation to be negated in an ongoing dialectic. The deconstruction of scientific objectivity and authority is going through this cycle. Intellectuals need to be attuned to the shifting context of their work and cognizant of just who their weapons are serving. We need not only critical thinking but meta-critical thinking, which is to say thoughtfulness about the uses and abuses of criticality. This is something the ancients did better than the moderns, because they understood how some truths are too dangerous to be widely shared. We, however, insist on demonstrating just how “smart” we can be.

This suggests that the intellectual class should not continue the same old assault on “facts” today that it waged yesterday. But it also cannot simply revert to the very myths about science it has so long debunked. Unfortunately, this tactic animated much of the March for Science. For example, some of the organizers pointed out that science is the basis for “many useful technologies,” such as airplanes. Sure, airplanes have their upsides, but they also contribute to the climate problem that formed a key motivation for the march. When you defend something as large and manifold as science, you are bound to get caught in such contradictions. Were they marching for vaccines and bioweapons? Were they marching for clean coal technology and fracking and solar panels?

The march website proclaimed that “science is a process, not a product.” So, they were marching for a process. But that’s a little like rooting for the referee; or worse, it’s rooting for any outcome as long as it falls out the back end of the scientific method. The march website further stated that “science serves the interests of all humans, not just those in power.” But wouldn’t those who work with indigenous communities cringe at this—even though they also oppose Trump? And wouldn’t the same go for feminists and postcolonial scholars who have long documented science as a tool (or shall we say a process?) of oppression?

What we need is a more critical defense of science: progress in science criticism as well as in science. We might learn from the post-war responsible science movement, when scientists first engaged in protest and activism. When they marched, it was for specific policies, not science writ large and full stop. Similarly, during some of the first Earth Day marches, people were for some kinds of science and decidedly against other kinds. Rachel Carson was for biological pest controls and against many chemical pesticides. In short, things were more nuanced and explicitly linked to policy goals and the values and visions justifying them. They didn’t just chant, Facts R Us.

Of course, our challenge today is different, because it is not just about which science to promote but also about the status of science. We have to hold on to the contradictions, by affirming the interpretive richness of reality and the hardness of certain matters of fact. To quote the CNN commercial, sometimes an apple is an apple. Sometimes we should open black boxes, and sometimes we should close them and pronounce the controversy dead. Knowing when to do which—to debunk or “bunk,” to distrust or trust—is a matter of judgment more than epistemology or logic. You have to be equally suspicious of claims that reinforce and claims that challenge your worldview.

In the age of internet trolls, that kind of moral character is in short supply. And until we address this problem, no amount of facts will make an iota of difference. Everyone is fully armed with his or her own set of those. If I may be permitted my own nostalgia, it would be for the town hall meeting where people emerge from behind their screens and gather to talk about the way they understand and take up with the world. To borrow, as Latour does, from Heidegger: we might consider talking as seriously about matters of concern as we do about matters of fact. Silly, I know. But maybe no more so than protesting for reality.

Child Support in the Age of Complex Families

The American family doesn’t look like it used to—at least for the economically disadvantaged. Parents are often unmarried when their children are born. Unmarried couples’ unions are often unstable; they frequently split soon after the birth of a child and the partners often quickly form new relationships. From a child’s perspective, this means a daddy-go-round of successive father figures with varying levels of engagement and support. Along the way, both parents often have children with other partners, leading to complex families. As 42% of US children are now born outside of a marital bond, some observers have labeled the United States a “complex family society.”

To acquire a clearer picture of these families, Laura Tach, Sara McLanahan, and I turned to the Fragile Families and Child Wellbeing study, a longitudinal birth cohort survey that began in 2000 and is nationally representative of births in large cities. We limited our analysis to children born to unmarried parents, since most children of disadvantaged parents are now born outside of a marital bond. The results were stunning. By age five, nearly 80% of the children had experienced either instability or complexity in some form, and almost half had experienced both. (See Figure 1.)

These levels of instability and complexity are unprecedented in US history. Moreover, the United States is unique among rich nations in this regard. And the trends are concentrated among the disadvantaged. In fact, whereas nonmarital childbearing has risen dramatically among non-college-educated women, it continues to be relatively rare among those with college degrees, for whom divorce rates have been falling since the 1970s. In sum, whereas the family lives of those at the bottom of the educational distribution grow ever more unstable, the opposite trend characterizes those with a college degree. Although instability and complexity may be, in part, a consequence of inequality, complex families are also a cause of the diverging destinies of US children by socioeconomic status.

During the 2000s, the Bush administration labeled these trends a crisis and launched a campaign to promote marriage. It funded public information campaigns and launched randomized controlled trials of a variety of interventions aimed at improving the relationship skills of unmarried couples who were expecting a baby or had just given birth. The most successful of these programs did have some impact on instability. Oklahoma’s Family Expectations, for example, saw an encouraging increase in one measure of family stability three years after random assignment. Other interventions, however, registered no effect, perhaps because it proved so hard to interest couples in participating, leaving the “treatment” weak. Qualitative interviewers who followed couples in such programs over time noted that participants found them valuable, so we shouldn’t dismiss these programs. Nonetheless, it is unlikely that such efforts alone will turn these trends around.

Given the reality that many children are now born into unstable and complex families, the challenge is to ensure that they nonetheless can thrive. The nation’s child support system solves a key practical problem in a complex family society such as ours: how to get resources from parents (usually fathers) to children who do not live with them. The system requires fathers to contribute financially when the parents’ relationship ends. Yet—at least implicitly—the current system is built on the widely shared assumption that these absent fathers feel no responsibility for their offspring and must therefore be forced to provide. But this assumption could be wrong, and if it is, then this heavy-handed approach to enforcing paternal responsibility could be blinding us to other approaches to engaging fathers in financially and emotionally supporting their children.

Let’s begin with what’s good about the current child support system. Though far from perfect, it has enormous reach. Its caseload covered one in five US children in 2016, and fully half of all poor children are served by the program. In 2013, child support payments for poor custodial parents who received them made up, on average, 49% of their annual personal income. For those custodial parents below poverty who were paid the full amount owed, child support accounted for over two-thirds (70.3%) of average annual personal income. (See Figure 2.)

The child support system has grown in importance during the past two decades as the federal government’s Temporary Assistance to Needy Families (TANF) program has withered as a result of welfare reform. By contrast, during this period the child support system has become more adept at collecting payments from noncustodial parents with an order. As a consequence, noncustodial parents who pay are contributing a much larger share to custodial parents’ total income. The result is that when poor custodial parents receive child support, both parents contribute more or less equally, and it’s the parents, not the taxpayers, who provide the lion’s share of support.

It behooves us, therefore, to explore how an evolving understanding of noncustodial fathers can be used to improve the child support system, which is such a vital source of support for an increasing number of US children who live apart from one of their parents.

New view of fathers

The key to making the child support system even more effective is to capitalize on fathers’ desires—evident in both survey and qualitative data—to be involved in their children’s lives. This evidence goes against the conventional wisdom that unmarried fathers selfishly flee the moment they hear of an impending pregnancy, act like boys instead of men, and leave the mother holding the proverbial diaper bag.

Let’s start at the time of a child’s birth. Data from the Fragile Families and Child Wellbeing survey show that the conventional wisdom is far from the truth. According to mothers’ reports (which are more conservative than fathers’), four out of five fathers contributed financially during the pregnancy, three-fourths had visited the mother in the hospital by the time she was interviewed by the survey team, and eight in ten mothers said the father planned to continue to contribute financially. An astonishing 99.8% of fathers interviewed said they wanted to be involved in their children’s lives. And 93% of mothers agreed.

Over the past two decades, Timothy Nelson, Laura Lein, and I (along with several dozen graduate students) have conducted repeated in-depth interviews with 428 low-income noncustodial fathers of 753 children in four metro areas: Austin, Texas; Camden, New Jersey/Philadelphia, Pennsylvania; Charleston, South Carolina; and San Antonio, Texas. In 2013, Nelson and I published Doing the Best I Can, a book on the social meaning of fatherhood based on the Camden/Philadelphia interviews. Our book, along with the findings of several other in-depth interview studies, reveals that these men often identified fatherhood as a key source of meaning and identity in their lives.

We asked each father a question that proved particularly revealing: What would your life be like if you hadn’t had your children? Mike told us, “I’d probably be dead somewhere, or back in jail, in and out of rehabs…. It’s given me something to fight for, something like a destination. I got to be somewhere.” Apple said, “I guess after I got caught up in the bad life, as far as drugs and all, the kids helped me keep my head up, look forward…. Kids give you something to live for.” According to Alex: “I would be out getting high because I would not have [anything]. I would have my girlfriend, but my baby is the most important thing in my life right now.”

We asked Elvis, What did you think your future was going to be before you had him? “I wasn’t going to live past the age of 30,” he said. And then once you had him? we asked. “He came into the picture when I was like 27, and that all changed,” he answered. “Everything changed. My whole life changed.”

In our 2005 book, Promises I Can Keep, Maria Kefalas and I described 165 in-depth interviews with single mothers in Camden and Philadelphia, and we advanced the argument that a key reason so many poor women put motherhood before marriage was that while they aspired to marriage, they doubted a poor but happy marriage could survive. Yet they viewed motherhood as a key source of meaning and identity; children filled the void when other sources—such as a chance at a meaningful career—were limited. In Doing the Best I Can, Nelson and I reported on evidence that the same could be said for young men. So much for the “love ’em and leave ’em” narrative.

Another reason I advocate for greater attention to child support is the mounting body of evidence that it enhances child well-being. A meta-analysis by Paul Amato and Joan Gilbreth included 14 studies that measured the impact of child support on child well-being. Two subsequent studies have been conducted by Laura Argys and colleagues and Lenna Nepomnyaschy and coauthors. Taken together, this body of empirical research shows that child support, broadly conceived, has a positive impact on child well-being, including cognitive skills, emotional development, and educational attainment. We’ve already seen what a significant effect child support can have on a custodial mother’s income, but these analyses tie the receipt of child support directly to measures of child well-being. And some researchers find evidence that the positive impact of child support, broadly conceived, exceeds that of other sources of family income, hinting at a special salience for children.

Flaws in the system

In spite of all the value I find in the current child support system, I maintain that in some fundamental ways the system is broken. And there is evidence—both “hard” evidence from quantitative studies as well as more speculative qualitative data—to suggest that at least for some, the system may actually compromise child well-being.

To start with, a more fine-grained look at the very empirical studies I just referenced shows that there may be cause for concern. Those studies that separate out the various forms of support show that most, if not all, of the positive effect of child support on child well-being accrues to children whose parents have informal arrangements; that is, agreements that are developed by the parents rather than imposed by the court. More worrying, one carefully done study shows that formal child support receipt is associated with elevated child behavior problems.

Other hints that something may be wrong can be found in patterns of participation over time. Though the system has become more effective over the years in collecting on behalf of those with formal orders, it has become less effective at convincing custodial parents to participate at all. This is certainly not because the number of custodial parents is falling. Some of the fall-off in participation is no doubt due to the decline in the welfare rolls (welfare recipients are required to participate). But child support is a near-free service for anyone who wishes to apply. The declines appear to reflect custodial parents who see less and less need to engage with the system, who prefer informal arrangements, or who worry that their child’s father might not have the capacity to keep up with a formal award. Given the highly punitive nature of the current system, the costs to such men could be steep. Are custodial parents voting with their feet? Perhaps so.

Robert Doar, the Morgridge Fellow in Poverty Studies at the American Enterprise Institute, has offered one solution to this problem: require single mothers on the Supplemental Nutrition Assistance Program (SNAP) to sign up, just as welfare recipients are required to do. Doar recommends allowing for good cause exemptions and stipulating that the support would be disregarded in calculating SNAP benefits. But for programs that people value and are not invested in avoiding, behavioral science suggests that creating patterns of easy enrollment (defaults, nudges, and so on) can lead to greater participation—and satisfaction—than requirements or threats. Better results are often obtained by smoothing a process that could otherwise be demeaning or cumbersome. Plus, nudges don’t put the government in the position of denying needy children access to food.

But note my caveat in the prior paragraph: “for programs that people value and are not invested in avoiding.” We have to make sure that participants in the child support system find it valuable. And we have to make sure it actually promotes child well-being.

Rachel Butler, a PhD student at Johns Hopkins University, looked closely at the transcripts of the 429 interviews in our four-city study to identify comments related to formal, informal, and in-kind child support and to code them by theme. She took care to count only words or phrases that were “naturally occurring,” that is, introduced by the interviewee, not the interviewer. The results were striking.

The themes most closely associated with formal child support were legal terms, a loss of power and autonomy, and financial terms. Court is the most common word used. Jail is the next most common. We didn’t find frequent mentions of court surprising, since in most locales, orders for unmarried parents are often set by an administrative judge. Jail was surprising. Although every state has either civil or criminal statutes on the books that permit incarceration for nonpayment, there have been no national estimates of how frequently the practice is used. We turned to Lenna Nepomnyaschy and her graduate student Sarah Gold, both experts on the child support modules in the Fragile Families survey, to see if they could construct such an estimate. Their analysis revealed that 12%-13% of nonmarital children covered by a child support order—roughly one in eight—had seen their father incarcerated for nonpayment of child support by the time they reached age nine.

No wonder that one father said his participation in the formal system “makes me feel like I need a judge to make me responsible!” Another said the system made him feel like “some kind of criminal.”

Next comes loss of power and autonomy, exemplified by phrases such as “they take it,” “jacking me up,” “they come looking for you,” “[makes me feel like I’m] not man enough,” “hounding me,” “she can whip me with that” [the threat of child support], “at their mercy,” and “crushing [my] manhood.”

Needless to say, these are not good testimonials for the program. The final theme is financial, with words and phrases such as “debt,” “impossible [financial demands],” “in the hole,” “they don’t credit me” [in reference to informal and in-kind contributions, which are not counted toward child support obligations in any jurisdiction], and “just another bill to pay.”

Sociologists who conduct analyses such as these often examine what isn’t present in narratives as well as what is. Notably absent are mentions of positive co-parent relationships, any mention of the father/child bond, or any sense that formal child support counts in fathers’ minds as a form of provision.

Language used to describe informal child support is strikingly different from that used in reference to formal support. Positive statements about the co-parenting relationship are particularly common. The word “agreement” is used most frequently. The other prevalent theme is positive expressions of power and authority—for example, the use of “give” rather than “take.”

Phrases indicating positive co-parenting include “she trusted me,” “we made an arrangement,” “we go together,” “do things together,” “sit down and discuss things,” “a bargain we struck,” “she can count on me,” “we split everything in half,” “worked it out directly,” and “worked it out like civilized people.” Positive expressions of power and autonomy include phrases such as “I take care of things,” “I give his mom some money,” “I send some money to my [little] girl,” “I put no limit on what I give,” “my first priority,” “I pay faithfully,” “I do more than others,” “I always make sure,” “it fills my responsibility,” and “I do for her.”

Before we move on, I must note that there is a potential dark side to feelings of power and autonomy when exercised inappropriately. Whereas we might find it encouraging when fathers feel empowered in general—and especially when they feel empowered to fulfill their roles as parents—most of us would find it worrying if fathers used informal contributions in coercive ways, lording it over their former partners. We do see examples of both in the data and note that one virtue of the formal system is that it allows the custodial parent to retain control.

Discussions of in-kind support also evoke distinctive language. In particular, they often include explicit references to the father/child bond. In another analysis of these data, Jennifer Kane, Timothy Nelson, and I find that disadvantaged fathers in particular prefer in-kind support because it serves “to repair, bolster, sustain, and ensure the future of their connections to their children.” We also find that “in-kind contributions can be repositories of paternal sentiment,” creating “a repository of memories—special times on which they hope their children can draw as a reminder that Dad really cares,” especially when fathers cannot be present in their children’s lives due to incarceration, maternal gatekeeping, or other reasons. In-kind contributions are often made in direct response to what children want or need. Phrases include “My daughter knows if she ask me, it’s there,” “I talk to her about what she needs,” “it’s more personal,” “I had to get it for her [when she told me how much she wanted it],” “I like to take them [shopping],” “I make sure my son got what he need,” and “I wanna show them the love [by buying them things].”

A second theme associated with in-kind support (also a prevalent, but not a predominant, theme in describing informal support) is provision. Representative phrases address meeting needs: “I put clothes on her back and shoes on her feet,” “I am in charge [of paying for Pampers and formula],” “whatever she wants,” “whatever he requires,” “anything I have,” “I take care of mine” or “I take care of my share,” “I try to provide anything he needs,” and “I want my kids to have the best.”

Child support relies, at least to some degree, on the goodwill and cooperation of the fathers and mothers who are eligible to participate. The interviews we conducted leave no doubt that noncustodial fathers prefer informal arrangements for supporting their children. The analyses also hint at what might lie beneath the vexing finding that informal arrangements seem to contribute more to child well-being and that participation in the formal child support system could be producing unintended corrosive effects. At least for the noncustodial parents we studied, formal child support is associated with an adversarial, dignity-stripping process that reduces any sense of provision to a debt, to “just another bill to pay.” Meanwhile, the formal system does little, if anything, to help parents build a positive co-parenting relationship or grant dignity to men who do pay. Furthermore, child support does nothing to build the father/child bond.

The question we need to confront is, what would it take to make child support a truly family-building institution? Focusing exclusively on securing economic resources for children—the worthy purpose of the formal child support system—is not enough to achieve that goal.

Before I turn to policy recommendations, allow me to propose some criteria for intervention. For the past several years, Laura Tach, Luke Shaefer, and I have been advocating for what we’ve called a dignity litmus test. We’ve encouraged policy-makers and practitioners to ask whether an intervention serves to offer participants a sense that they are valued members of the community or whether it generates stigma and shame. Along with economic resources, whatever policy solutions we adopt ought to lend dignity to participants. Other experts—including john powell of the Haas Institute for a Fair and Inclusive Society, and Ai-jen Poo of the National Domestic Workers Alliance—have added another dimension to our thinking, emphasizing the importance of power and autonomy. Thus, a holistic effort to promote mobility from poverty should be guided by these three principles: economic resources, power and autonomy, and respect in the community. (I’d like to note here that these principles have been embraced and endorsed by the US Partnership on Mobility from Poverty, funded by the Bill & Melinda Gates Foundation and administered by the Urban Institute. Poo, powell, and I are members of the partnership.)

Principle into practice

Let’s take each of these criteria in turn. To advance the goal of ensuring that kids receive sufficient economic resources, while also ensuring that fathers have sufficient resources to give and are motivated to do so, we need to make it feasible for every noncustodial parent, no matter how limited their circumstances, to participate. Until fairly recently, it could be argued that the formal child support system was, for some, actually diminishing the motivation and capacity to pay due to unsustainable awards, a lack of adequate self-support reserves, incarceration without consideration of whether a father had the ability to pay, and the treatment of incarceration as a form of “voluntary unemployment”—which meant arrears continued to mount while fathers were in prison. However, thanks to a rule issued in December 2016 by the Obama administration’s Office of Child Support Enforcement, these practices are now discouraged or disallowed.

But we need to go further. We need to find a way to engage fathers with very low incomes or very unstable employment. Mechanisms for real-time modification are needed. And—as I will suggest later—alternative kinds of support should count.

Equally important is to deal head-on with the difficulties faced by men who have children by multiple partners—a condition that almost inevitably leads to unsustainable awards. Consider the case of Texas, which employs a percentage-of-income formula (one of three formulas states use) to set child support awards. If a noncustodial father has three children by one partner, he will be expected to pay 30% of his net income in child support. But if those children are with three partners, his obligation will be roughly 50%. If fathers with multiple partners fall into arrears, which they are likely to do, a state can withhold up to 65% of their wages if certain conditions apply. No state that I know of has figured out a way to deal with this dilemma. But as Figure 1 illustrates, a large number of fathers and children are in this situation. The goal here is not to let fathers off the hook, but to encourage more fathers to pay—and more mothers to sign up for the program—by making it feasible for every father to participate.
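To make the arithmetic concrete, here is a minimal sketch of how a percentage-of-income formula stacks up across households. It assumes the guideline percentages Texas publishes for children in a single household (20% of net income for one child, rising five points per child) and deliberately treats multiple households naively, simply summing the separate orders; the state’s actual multiple-family credit softens the total somewhat, which is how the roughly 50% figure cited above arises. Treat it as an illustration of the dilemma, not legal guidance.

# A minimal sketch of a percentage-of-income child support formula.
# Assumptions: Texas-style guideline percentages for a single household;
# the multiple-household calculation is naive (it just sums the separate
# orders) and ignores the state's actual multiple-family credit.

GUIDELINE = {1: 0.20, 2: 0.25, 3: 0.30, 4: 0.35, 5: 0.40}

def single_household(net_income: float, n_children: int) -> float:
    """Monthly obligation when all children live in one household."""
    return net_income * GUIDELINE[min(n_children, 5)]

def multiple_households(net_income: float, children_per_household: list[int]) -> float:
    """Naive total when children are spread across households: each
    order is set as if it stood alone, so the percentages stack."""
    return sum(single_household(net_income, n) for n in children_per_household)

net = 2000.0  # hypothetical monthly net income
print(single_household(net, 3))             # 600.0, i.e., 30% of income
print(multiple_households(net, [1, 1, 1]))  # 1200.0, i.e., 60% before any credit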

Second, how do we meet the goal of supporting children without stripping fathers of power and autonomy, as the formal system now does? I argue that we should make formal support more like informal support, which stokes positive feelings of power and autonomy. Minnesota offers an example of how this can be done. When serving as presiding judge of the Hennepin County Family Court, Bruce Peterson helped create and then went on to pilot the nation’s first Co-Parent Court. His docket was limited to the toughest cases, those where the father had denied paternity and paternity had to be established in order to set an award. (Those who voluntarily established paternity—the majority—had their awards adjudicated by an administrative judge.) Peterson’s innovation was to offer parents the opportunity to learn co-parenting skills by participating in four training sessions that custodial and noncustodial parents attended separately. Then, in a final meeting with a mediator, the parents came together and were guided in crafting a tailored parenting agreement they both felt was reasonable and served the best interests of the child. These agreements then went back to the court for approval. Strikingly, two-thirds of the 709 parents involved completed the classes, a very high figure for any intervention. More than half completed cooperative parenting agreements together, and all but four did so with additional assistance from the courts.

Subsequent analysis showed that participation increased visitation. When asked how many hours per month the father spent with or visited their child, mothers in the intervention group reported an average increase of 64.9 hours per month, whereas control mothers reported an average increase of 20.5 hours, a marginally significant difference. Furthermore, there was a significant increase in mothers’ assessments of co-parenting quality; 63% of those in the Co-Parent Court intervention reported a positive change in the quality of the co-parenting relationship, as opposed to 36% of controls. Though the program did not significantly boost total child support payments, the fathers who completed the program paid a significantly larger share of the child support owed—86% versus 69%—than those in the control group. Clearly, more experimentation is needed, but these are encouraging signs.

Helping parents build co-parenting skills may be especially important given empirical findings cited earlier showing that the receipt of child support can actually have a corrosive effect on child well-being. We cannot be sure, but it is likely that the effect is due to the fact that noncustodial parents do not usually end up in the formal system unless, or until, co-parenting has broken down. Couple conflict, and not formal child support itself, is the likely cause of harm to the child. If we want to ensure that children will benefit from participation in the formal system, improving co-parenting skills should be a central goal. Plus, letting parents have a role in setting the parenting agreement should promote compliance.

In addition, joint custody should be considered unless either parent can demonstrate good cause. Currently, in nearly a third of the states, an unmarried mother is given sole custody even if the father establishes paternity. But among divorced parents, joint custody has grown rapidly in recent years, both legal (where parents share decision-making) and physical (where children live with both parents). There is no reason why these two groups—one more likely to be middle class and white, and the other disproportionately working class or poor and nonwhite—should be treated differently. Research shows that awarding joint legal custody to unmarried parents increases child support payments by an average of $170 per year.

Furthermore, if our qualitative analysis is any guide, a re-envisioned child support system can enhance the father/child bond and build fathers’ own sense that they are “providing” by making it more like in-kind support, the form of support most closely compatible with these goals. If the custodial parent agrees, we should allow informal contributions (cash given directly to the mother) and especially in-kind contributions (such as payment of the child care bill or the purchase of school supplies) to be recognized. Currently these are usually excluded, labeled “gifts” rather than support. Changing this rule would solve one crisis of legitimacy the child support system faces in the eyes of the men we’ve interviewed—that no one holds the mothers accountable for how the money is spent. Fathers commonly voiced fears that their contributions would be siphoned off by the mother herself, or worse, by her new boyfriend. It was common for men to say this reduced their motivation, or even made them unwilling to pay.

Allowing in-kind contributions to count may be especially important for very low earners or those with very unstable work. For such fathers, services such as household or yard maintenance, not just goods, should be allowed to count as support if the mother agrees. About two-thirds of states already have an adjustment for parenting time in their child support guidelines, an idea we can build on. The goal is to have every noncustodial parent participate in a system that obligates them to contribute but also recognizes contributions in a variety of forms. Small contributions are often viewed in low-income neighborhoods as placeholders for larger contributions that men hope to make later on, when their circumstances improve.

Finally, we should remember that the original goal of the child support system when it was founded in 1975 was to reimburse the government for welfare costs. For clients on TANF (and sometimes Medicaid), this practice continues today. The government keeps all or most of the money (or in the case of Medicaid, medical support) provided by the father, rather than passing it along to the mother and children. This sends the wrong message to noncustodial fathers. Our qualitative data confirm that fathers frequently complain that, at least in their eyes, the state “takes its cut” before their money gets to their child. The negative valence of cost recoupment is another crisis of legitimacy the system faces, and we might well see greater cooperation and higher participation if we ended the practice.

Let’s now turn to the final criterion, ensuring that every participant is treated as a valued member of the community. Although the formal system began to transition from a sole focus on recouping welfare costs to a “family focus” as early as the mid-1990s, the program continues to treat the noncustodial parents it serves as paychecks, not parents. The primary problem is the failure to set—and enforce—parenting time agreements in cases where the parents were never married and thus there was never a divorce. (Two states already do this, and a few others offer assistance in securing parenting time.) The fathers in our study sometimes refer to the stripping of what they see as their right to parenting time as “taxation without representation.”

The current two-tiered system of child support, one that offers rights to formerly married fathers while denying them (at least de facto) to the unmarried, must end. And it might well be very productive to do so. The director of San Francisco Child Support, Karen Roye, described to me a 2004 experiment in which the local community college, the City College of San Francisco, teamed with the child support system to help 100 families. They identified a number of women who wanted to take classes but couldn’t because of childcare responsibilities and a lack of money due to the failure of the noncustodial fathers to pay child support. They negotiated visitation agreements with the dads under which the fathers would take care of the kids while the mothers were in class. It was a stunning success. All of the mothers graduated with a certification or associate’s degree. A significant number of the dads went on to earn degrees too. Child support was paid more reliably. Several of the couples even reunited.

Finally, treating noncustodial fathers as valued members of the community will require that we make investments in their capacity to pay. Federal child support funding cannot be devoted to such purposes at present. That should change. Other innovations, such as extending the Earned Income Tax Credit, the Child Tax Credit, or both, to fathers who pay child support, have gained traction. These would not only reward those who pay through the formal system but would also increase the incentive for men to work in the formal economy.

But let’s take the opportunity to go even further, given the complex family society in which we now live. We can’t return to the 1950s. We need to take families as they are. In our society, “co-parent” should be recognized as a key social role. Judge Peterson of Minnesota’s Hennepin County has suggested that we offer “enhanced recognition of parenting” in the legal systems that adjudicate child support that would value the variety of ways in which fathers can play a role in their children’s lives. This could even involve “commitment to parenting” ceremonies, where parents who share a child could invite families and friends to witness their joint commitment to co-parent. It’s not as wild an idea as it sounds. We all know—at least if we’ve had enough sociology courses—that social norms and institutions influence behavior. We should experiment with any number of approaches such as these to see if they work.

The key to progress is abandoning the false assumption that men are indifferent to their children. Then we can design policies that build on the felt connection to a child to help fathers overcome the barriers to playing a more constructive co-parenting role. Earning respect as parents could also motivate these men to seek better jobs and behave more responsibly in other aspects of their lives. They could become the valued members of the community that they aspire to be.

I will end with a story that illustrates how even small actions can make a big difference. One problem that Peterson faced as presiding judge of the Family Court was getting the putative noncustodial fathers to show up for hearings at all. He felt that the letter the court sent was needlessly pejorative and full of legalese. When the county adopted the Co-Parent Court as part of an innovative demonstration project, he had a chance to do something about it. Families in the project’s control group continued to get the standard letter. However, he made sure those in the intervention received a different kind of message. Though the fathers in both groups were ordered to appear, the Co-Parent Court letter eliminated some of the legalese, talked about providing services, and included a welcoming, colorful brochure with photos of fathers and children. It laid out, in simple language, the next steps. What happened? According to Peterson, “Twice as many people in the intervention group came to court because we didn’t [simply] call it a paternity hearing. We said that we’re going to talk about your family.”

TMI?

Because she has done justice to the morally challenging and politically explosive topic of the role of genetic testing in reproduction, the primary experience of reading Bonnie Rochman’s new book, The Gene Machine, is one of discomfort. If you want to consider the quest for “better” babies from wildly divergent, deeply personal points of view, there is probably no better book out there. Her excellent journalism skills are on full display here, and Rochman addresses head-on many of the most divisive issues in the field.

The central question of the book is how parents react to the onslaught of information they get when they do comprehensive genomic sequencing of their children. “Is genetic knowledge empowering or fear-inducing, or both?” Rochman asks. “Will it stress parents out or make them savvier?”

The Gene Machine is ultimately an ode to personal choice within the thicket of options coming our way. Unfortunately, Rochman avoids delving meaningfully into a discussion of how those choices can be formed or restricted. For example, economic realities may preclude emerging options for many women, and social norms about what kinds of children are desirable affect all of our imaginations. Particular communities can also face restrictions on their reproduction. For example, women who live in a state with very few abortion providers can lose the ability to make that choice; women who are pregnant in prison, who are struggling with addiction, or who receive welfare tend to experience greater policing and surveillance throughout their pregnancies. People familiar with these kinds of scenarios may be frustrated by the book’s minimal contextual discussion.

The book is at its best when exploring the personal and heart-wrenching stories of parents living with despair, fear, or regret from decisions they made or options they overlooked. Reading this book will certainly make you take a hard look at your own preferences and needs. As Rochman states, “This book is not about right or wrong answers, only extremely personal and intimate calculations.” However, she acknowledges, “The testing we elect and that which we forgo, and the choices we make based upon the results, has profound implications for what sort of society we want to be.”

Rochman focuses broadly on five subjects of genetic testing: members of couples prior to reproduction; embryos produced via in vitro fertilization; fetuses during pregnancy; newborns; and children with rare diseases. Each of these cases has slightly different ethical dimensions, but deals with many of the same themes: Can genetic sequencing impede someone’s right to an “open future”? Can it lead to stigma or discrimination if someone is labeled as having a particular trait or tendency? Should people be informed about incidental findings? And what do people need to know in order to make an informed choice?

Rochman is largely in favor of genetic testing in general, and preventative testing in particular. She discusses the San Francisco start-up Counsyl, which pushes for universal carrier screening in an attempt to prevent the transmission of diseases caused by single-gene mutations. As an example of the potential of that concept, she points to the push within the Jewish community for prospective parents to test for Tay-Sachs disease. Such targeted screening reaches only those already known to be at risk. But, Rochman asks, wouldn’t it be better for everyone to know ahead of time, just in case?

She asserts that this knowledge would allow parents to be “in control,” but there is always a gamble. Prospective parents who learn they are at risk of a genetic disease may opt for in vitro fertilization combined with preimplantation genetic diagnosis, but this is often a long, expensive, and uncomfortable process that results in successful births only a portion of the time. Other alternatives, such as adoption or prenatal screening followed by the possibility of abortion, have their own hurdles. None of which is to say it is not better to know, but only to point out that the experience of control may be short-lived.

Genetic testing of fetuses during pregnancy opens up particularly sensitive issues, largely centered on the politics of abortion. To her credit, Rochman addresses this difficult topic head-on. She acknowledges nuances in the accuracy of tests for chromosomal abnormalities that might lead parents to choose to have an abortion, and the difficulties that this can cause. She notes that abortion services are becoming harder to access in some states. She addresses the decline in the number of babies born with Down syndrome, and a range of views about that fact from people who are intimately familiar with the condition. And she points out that with the barrage of testing coming our way in the near future, “A savvy consumer doesn’t automatically say yes.”

Nonetheless, she points out that parents use the Internet to try to understand and diagnose their children all the time. Parents meticulously track their babies’ sleeping and eating patterns; and pediatricians note a baby’s weight and height and compare them with population statistics. “We are accustomed in our tech-savvy world to accumulating data on our children,” Rochman states. Is genetic data any different?

In some ways, it’s not. For example, genetic data has all of the normal complications associated with other health data. These include security and privacy, which are hardly discussed in The Gene Machine, as well as issues relating to identity and the possibility of discrimination. Rochman makes the case that we should avoid genetic exceptionalism and learn what we can from other health fields that deal with sensitive information. But she is also uniquely interested in the bigger picture of what people do with their genetic information, and in what it means for the family-making process when parents have greater influence over the kids they have.

Rochman mentions the growing discussion of using new precision gene editing techniques, such as CRISPR, not merely to select a particular embryo during the in vitro fertilization process, but to modify one. However, Rochman also rightly points out that so-called designer babies already exist to a certain extent.

Anyone using a sperm or egg donor encounters menu-style choices about the kind of donor they want: hair color, eye color, weight, height, educational attainment, musical ability, and so on. The company GenePeeks will even “match” women’s DNA with that of men in a sperm bank in an attempt to prevent hundreds of possible diseases. These are arguably more precise methods of getting a child with a particular trait or propensity than genetic modification.

Moreover, the practice of preimplantation genetic diagnosis already puts parents in the position of being asked to choose one embryo over others. The basis of these decisions has sparked important debates in disability rights and justice communities. Should deaf parents be allowed to choose to have a deaf child? What does it mean to systematically assert that some conditions make life not worth living? Who decides and how do parents receive information about the range of possibilities? And finally, what should they do with that information?

This can become even more complicated when testing reveals a problem that is not well understood. Rochman shares the story of Maya Hewitt, who had genetic testing done for her son Daniel in an attempt to understand why he was experiencing hearing loss. The testing revealed that Daniel was missing a piece of his fifth chromosome, including a gene known as TERT. Mutations in that gene can lead to a host of complications, including increased risk of cancer and bone marrow failure. However, there is scant evidence about what simply missing the gene, as in Daniel’s case, might mean. Maya was told that Daniel might be affected, but that it couldn’t be predicted. This kind of inconclusive result might be the most challenging for parents to receive.

Maya Hewitt explains in the book, “Maybe I expect too much from science that they’re going to know what everything means. But our experience is that we are caught in a gray area—our report says ‘of unknown clinical significance.’ I wish we could stuff it back in the box and just live in ignorance. I wish I didn’t know.”

Based upon her extensive interviews with parents, Rochman suggests that there might be a useful distinction for thinking about genetic knowledge of existing children. She finds that for parents who perceive their child to be healthy, receiving negative or ambiguous information can be damaging. However, if they know their child is sick, they tend to want every snippet of information they can get. Rochman illustrates this point with the story of Shashi and David Goldstein’s daughter Cara, who was about to start chemotherapy just in case it helped her unknown condition. But hours before chemo was scheduled to begin, genetic sequencing results revealed that massive doses of a simple vitamin would be enough to prevent the deterioration of her body.

Rochman’s own preference is clear. A self-proclaimed “information junkie,” she opens the book with the famous Socrates quotation “The unexamined life is not worth living.” She is hardly alone. In a 2014 study of 514 new parents, 83% said they would want to have their baby’s genome sequenced. When there is an opportunity to know more, few of us seem able to resist.

In the final paragraph of the book, Rochman writes, “Technology is just a means to an end, a way to make—and keep—children healthy.” Although many parents may experience this to be true, it seems doubtful from a broader perspective. Technology is rarely just a means to an end, and not everyone necessarily envisions the same end point. Amidst the feel-good stories of how genetic sequencing is increasing biomedical knowledge, we should also pay attention to the insidious capability of technologies to not only provide a means but slowly and subtly shape the ends. Even with the relatively crude technologies at our disposal today, parents routinely use them to affect their children, and the kinds of children they have, beyond merely keeping them healthy. One needs only to peruse the traits considered desirable on an egg or sperm donor website to be reminded that there are forces at play that do not fit so comfortably within a medical framework. The question of how much and what kind of information we should know will never be straightforward, and for good reason.

Hiding in Plain Sight

Hiding the truth isn’t often praised. In an age of “fake news,” spam Twitter accounts, and sham Facebook groups, there is already plenty of concealment and disinformation to go around. But in Obfuscation: A User’s Guide for Privacy and Protest, Finn Brunton and Helen Nissenbaum, professors at New York University, make the case for hiding the truth. They argue that the average user of technology should obfuscate—that is, deliberately add ambiguous, confusing, or misleading information to interfere with surveillance and data collection. They aim to start a limited revolution of the informationally unempowered by offering tools to bolster privacy, to make things marginally harder for an adversary, or even just to protest data collection.

Part I of this small-format book introduces obfuscation’s key characteristics and variations through examples. The book isn’t intended to be a taxonomy of obfuscation, but the examples offered provide insight into a range of strategies. First up: paper and foil chaff deployed by warplanes to create decoy signals and confuse radar systems. Like chaff, one strategy for obfuscation is to produce fake signals to hide the real signal. The authors discuss how, after irregularities in the 2011 Russian parliamentary elections, Russian government allies used Twitter bots to disrupt online protests by flooding protest hashtags with pro-Russia or nonsense messages. Other examples are more in keeping with the book’s theme of obfuscation as the tool of the oppressed, not the oppressor. TrackMeNot, for example, is a tool for online obfuscation that Nissenbaum and others developed to disguise a user’s online search queries by automatically generating a flood of fake queries.
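The chaff strategy that TrackMeNot embodies is simple enough to model in a few lines. The sketch below is not TrackMeNot’s actual implementation; the decoy pool, the send_query stand-in, and the timing jitter are my own illustrative assumptions. But it captures the core move: hiding one real signal in a stream of plausible fakes.

# An illustrative model of chaff-style query obfuscation, not
# TrackMeNot's actual code. The real query is buried among randomized
# decoys, with jittered delays so timing doesn't betray it.
import random
import time

# Hypothetical decoy pool; a real tool would draw from feeds of
# popular queries so the decoys resemble live traffic.
DECOY_POOL = [
    "weather tomorrow", "pasta recipes", "movie showtimes",
    "how tall is the eiffel tower", "used car prices",
]

def send_query(query: str) -> None:
    # Stand-in for issuing an actual search request.
    print(f"search: {query}")

def obfuscated_search(real_query: str, decoys: int = 4) -> None:
    """Issue the real query hidden among shuffled decoys."""
    queries = random.sample(DECOY_POOL, k=decoys) + [real_query]
    random.shuffle(queries)
    for q in queries:
        send_query(q)
        time.sleep(random.uniform(0.5, 3.0))  # mimic human pacing

obfuscated_search("name of a rare medical condition")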

Other obfuscation strategies involve genuine but misleading signals, such as the identical bowler hats of thieving confederates in The Thomas Crown Affair. A technology-based example given in the book is the swapping of cellphone SIM cards by terrorist operatives to thwart metadata surveillance and, ultimately, drone strikes. More mundanely, some groups of consumers swap store loyalty cards to be able to get discounts while disguising their personal purchasing patterns.

Having familiarized readers with obfuscation as a concept, Part II argues for the value of the technique. It attempts to address why obfuscation is needed, whether obfuscation is ethically justified, and whether it can work. That’s some heady work for a mere 50 pages. With such high ambitions, it’s no surprise that the section only partly achieves its goals.

There are good reasons for keeping obfuscation in the privacy toolbox, but Brunton and Nissenbaum overstate their case. They characterize obfuscation as a guerrilla tool especially suited for use by “the small players, the humble, the stuck, those not in a position to decline or opt out or exert control over our data emanations.” Thus, they allege that everyday people need obfuscation to deal with power and information disparities. I’m not convinced, for two reasons.

First, the powerful may in fact benefit more from using obfuscation tools to protect their personal privacy. To the extent we participate in society, we must exchange or transmit information we cannot control. But for the average person, it is his or her very obscurity that provides the bulk of privacy. The powerful lack obscurity. An elevator CCTV camera’s eye falls on the rich and the poor alike, but it matters more if you’re as famous as Jay-Z.

Indeed, because of the attention focused on them, the famous and powerful often find it particularly difficult to exert control over their data emanations—just think of tabloid photographs and investigative journalism. Recent headlines about sexual harassment also suggest that it is increasingly difficult for prominent people to conceal past misdeeds. If powerful people face more scrutiny, then obfuscation is more useful for them. And contrary to the authors’ argument, obfuscation employed as a tool of disruption may be most potent in the hands of those with significant resources. Consider the allegations of Russian interference in the 2016 US presidential election, with covert groups attempting to influence the outcome through disinformation campaigns. The book itself provides many other examples of the powerful using or benefiting from obfuscation: pro-government Twitter bots disrupting activist protests; car-sharing companies faking orders to rival services; a government agency massively over-producing litigation documents to slow a court case. Such uses of obfuscation by the powerful undermine the authors’ argument that it is a tool uniquely suited to balancing power disparities.

My second objection is that Brunton and Nissenbaum don’t effectively consider both the costs and benefits of obfuscation. They fail to differentiate between political and commercial uses of data, even though the pragmatic and ethical reasons for obfuscation are strongest in resisting political power and weaker in other cases. The authors describe the ordinary person in a large city as “living in a condition of permanent and total surveillance.” But if the goal is to reduce power imbalances, it matters who is doing the surveilling, and why. Netflix’s power over a consumer suspected of liking romantic comedies is quite different from the CIA’s power over a suspected terrorist overseas. Targeted advertising is not targeted killing.

Furthermore, the book only backhandedly recognizes the massive benefits of information sharing. The authors argue that practically speaking, we cannot opt out of the collection of our personal data, because “the costs of refusal are high and getting higher.” Another way of saying this is that the benefits of sharing are large and getting larger, and that they are now so enormous that anyone opting out would be making a lifestyle choice on par with the Amish or ascetic monks. The authors’ description of losing these benefits as the “costs of refusal” is their begrudging acknowledgement that consumers don’t live in a fantasyland—they must make trade-offs.

And the authors only hint at the potential additional cost of obfuscation techniques to those using them. Academics have begun to study data deserts: populations that are not substantially engaged in the data economy. By being underrepresented, these groups may be missing the social benefits from data use. Imagine, for example, the potential policy consequences of an entire ethnic group obfuscating their census forms. Obfuscation could create self-imposed data deserts.

Although the authors overstate the need for obfuscation and understate the costs, they more successfully make the case that it is a technique accessible to those with little visible power. They explain how obfuscation is an example of what the political scientist James C. Scott called “weapons of the weak,” in his 1985 book of that name. Scott studied how Malaysian peasants with little visible power still manage to engage in resistance through an accumulation of individual small acts of defiance. Obfuscation fits well into the category of tools that are incremental, subversive, and emergent.

So obfuscation is a potentially useful tool for which there is at least some need. But is it ethical? Brunton and Nissenbaum tackle four different objections to their obfuscation strategies: dishonesty, waste, free riding, and data pollution.

On dishonesty, the authors concede that obfuscation is lying, but argue that it may be justified if done for legitimate ends. I wish they had further explored whether all obfuscation is lying; intuitively, there appears to be a difference between generating false signals and creating genuine but misleading signals.

The charge that obfuscation wastes resources appears to have hit a nerve with the authors. Critics have argued that Nissenbaum’s TrackMeNot tool floods search engines with unnecessary searches, wasting shared broadband resources as well as privately owned search engine resources. The authors observe that waste is in the eye of the beholder. Certainly TrackMeNot users find the tool to be a worthwhile use of resources. But many of these resources are owned by others. The longstanding social and legal consensus is that a resource’s owner is the primary judge of whether a particular use of that resource is wasteful. Yet the authors treat this question as an as-yet-unsettled “political question about the exercise of power and privilege.” If justifying obfuscation requires rewriting US property rights, the authors have a long road ahead.

I am particularly puzzled by the discussion of free riding. In economics, the free riding problem holds that if one can use a service without paying, the service will be undersupplied, making everyone worse off. In the most extreme case, if everyone free rides, the service may cease being available entirely—no one rides at all. Thus the key ethical question about obfuscation is whether it is right to take an action that, if everyone else took it too, would leave all worse off.

This is not the question the authors ask. They ignore the effect of obfuscation on the viability of the service and the potential indirect harm to consumers. Instead, they focus exclusively on whether obfuscation directly harms non-obfuscating users. By pejoratively framing service providers as predators and consumers as prey, they transform the free riding debate into a debate about whether obfuscators have a duty to rescue the non-obfuscators from their “ignorance and foolishness.” Their conclusion to this patronizing question? No, because it is the predatory service provider’s fault. Never mind that the service provider is offering a service from which the obfuscator continues to benefit.

The discussion on data pollution is more measured. Obfuscation could “pollute” collections of data that may have social benefit. For example, obfuscation could contaminate a public health database, diminishing the benefits of such data. The authors note that unlike environmental pollution, there are no clear social norms about data pollution for most data sets, and, as with environmental pollution, it may be justifiable to sacrifice data integrity for other values.

For each of these ethical objections to obfuscation the authors resolve a few simple cases, but in what they call the “vast middle ground” of cases, they kick the can to politics. More specifically, to the political philosophy of John Rawls, known for his egalitarian theory of “justice as fairness,” and his theoretical “veil of ignorance” tool. Rawls sought to justify structuring society to ensure both equality and liberty, but his work is often seen as emphasizing equality over economic liberty. In this vein, Brunton and Nissenbaum argue that property rights and other societal and legal precepts ought to be “open to political negotiation and adjustment” with the goal of achieving the best results for justice, general prosperity, and social welfare. This analysis is skeletal at best—Rawlsian political theory can justify many different variations of property rights, for example—and the book provides no path to reopening negotiation on these foundational precepts. Given the authors’ own characterization of obfuscation as a tool of the politically weak, tying the ethics of obfuscation to the outcome of political and social reforms is a decidedly unsatisfactory solution.

The book steps out of the murk of ethical and political philosophy onto more solid ground when answering whether obfuscation works. It is easy to agree with Brunton and Nissenbaum’s cautious conclusion that obfuscation is a helpful strategy to meet certain limited goals. The concluding chapter comes closest to fulfilling the titular promise of a “user guide.” It repackages content from earlier chapters to identify six goals one might seek to achieve through obfuscation, such as buying time to evade a threat or to express protest. The authors then suggest four preliminary questions that obfuscation project developers should ask. For example, is the project intended for individual use, or does it require collective use to be effective? Is it intended to be hidden or public? Is it intended to obfuscate for just a short time, or for a longer period? This section is useful and practical.

In sum, the book demonstrates that obfuscation can be a useful tool for self-defense against the many entities that collect data about us. The justification for obfuscation is strongest when addressing unwarranted government data collection, where the power disparities are greatest and users have few alternative tools. It is weaker regarding commercial collection, where there are enormous benefits to the consumer and where other mechanisms, including markets and regulation, already constrain harmful behavior. Ultimately, however, obfuscation is an imprecise and incomplete privacy tool because it focuses on the collection phase of the data life cycle. Consumer benefits and harms occur later in the cycle—in the use or misuse of data. Obfuscation can only indirectly hinder harmful misuses of data, and in doing so, may also hinder beneficial uses.

President Obama’s War on Coal? Some Historical Perspective

In Massachusetts v. EPA (2007), the Supreme Court ruled that greenhouse gases are pollutants under the provisions of the 1970 Clean Air Act. Subject to this ruling, the Environmental Protection Agency determined that greenhouse gas emissions endanger public health and the welfare of current and future generations. Working in response to EPA’s endangerment finding, the Obama administration developed a greenhouse gas regulatory program called the Clean Power Plan. Relying on time-honored tools such as investments in energy efficiency, deployment of renewable energy technologies, improvements in the thermal efficiency of existing coal-fired power plants, and increased utilization of lower-emitting generating units, the Clean Power Plan aimed to reduce greenhouse gas emissions from the power sector by about one-third from 2005 levels by 2030.

On October 9, 2017, EPA Administrator Scott Pruitt announced the agency’s decision to repeal the Clean Power Plan. “The war on coal is over,” Pruitt declared at an announcement event in Kentucky, going on to say, “The EPA and no federal agency should ever use its authority to say to you: we are going to declare war on any sector of our economy.” He added in conclusion, “the past administration was using every bit of power and authority to use EPA to pick winners and losers on how we generate electricity in this country. That is wrong.”

It is not uncommon for presidents and other senior officials to invoke the “war” metaphor in an effort to rouse a political base and galvanize support in the pursuit of policy objectives. For instance, Lyndon Johnson fought a war on poverty, Richard Nixon prosecuted a war on cancer, and Ronald Reagan waged a war on drugs. Some people may even recall J. Edgar Hoover’s war on crime. As a review of selected historical circumstances suggests, Pruitt’s use of the war metaphor to characterize the Clean Power Plan is at best misleading and in some ways downright inaccurate.

Like all economic sectors, the power industry can be affected by a variety of factors, including market trends, changes in consumer preferences and public values, and implementation of regulatory actions intended to address voter concerns regarding public health, worker safety, economic competitiveness, or environmental protection. Over the past century, changes in social values, market trends, and public policy have all affected the fortunes of the coal sector. As suggested by the timeline, many of these changes have been enabled by the free flow of ideas and advances in science and technology.

[Timeline: “A Changing Environment for Coal”]

Make America Great Again

The United States emerged from World War II as the dominant technology-driven economy in the world. For decades, virtually every major technology was developed and initially commercialized within the US economy by a combination of government and industry investment in research and development (R&D) coupled with subsequent investment in technology-implementing hardware and software, skilled labor, and a world-leading technology-based infrastructure, including universities and government laboratories.

But today, technology is increasingly developed elsewhere in the world, creating severe pressure on domestic industries and supporting infrastructures. Domestic fixed private investment (FPI) in physical assets such as machinery, land, buildings, vehicles, and technology is too low, and survey after survey of industry managers shows that the supply of skilled labor is inadequate. Government research institutions and R&D budgets are still oriented largely toward a set of social objectives such as defense and public health that only indirectly leverage economic growth. The end result has been sluggish output and income growth.

For the first 30 years after World War II (1948-1978), when the United States was the dominant technology-driven and thus the highest-productivity economy, the average annual real growth rate of gross domestic product (GDP), the total value of goods and services produced nationwide, was 3.9%. During the next 30 years, the growth rate dropped to 3%, as the effects of globalization began to be felt. Since the 2008 recession, real economic growth has averaged 2.1%, and the Federal Reserve forecasts the growth rate to remain at around 2% for the foreseeable future. Thus, the US economy is expanding at roughly half its postwar pace.
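
The cumulative effect of that slowdown is larger than the annual rates alone suggest. As a quick illustration (a minimal sketch using only the growth rates quoted above):

```python
# Compound effect of 3.9% versus 2.1% average annual real GDP growth,
# compared over a 30-year window like the postwar period cited above.
postwar_rate, recent_rate = 0.039, 0.021
years = 30

postwar_multiple = (1 + postwar_rate) ** years  # ~3.15x: the economy more than triples
recent_multiple = (1 + recent_rate) ** years    # ~1.87x: the economy not quite doubles

print(f"3.9% for {years} years -> {postwar_multiple:.2f}x the starting GDP")
print(f"2.1% for {years} years -> {recent_multiple:.2f}x the starting GDP")
```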

One component of GDP that deserves special attention is household income. In 2016, US real median household income ($59,039) finally exceeded the level reached nine years earlier in 2007, just before the Great Recession. Many analysts have characterized this milestone as encouraging, but the reality is that the cyclical rebound took far too long, and this important income measure has barely nudged above its 1999 peak of $58,665. In other words, over the past 17 years, real household income has been essentially flat.

A key reason for the income stagnation has been the anemic growth in worker incomes. Real hourly compensation in the nonfarm business sector grew 2.8% annually from 1950 to 1980, then at a 1.3% rate from 1980 to 2005, and at a 0.6% rate from 2005 to 2016. Unfortunately, under current growth policies, the situation will not improve significantly. The projected continued 2% annual GDP growth will be insufficient to raise wages or the standard of living for most of the US population and will jeopardize meeting the government’s rapidly rising obligations such as Social Security and Medicaid in the decades ahead.

The 1960s, when real GDP grew at an average annual rate of 4.5%, was the last decade of sustained superior economic performance. In that decade, increased spending on the space program, defense, and health care coupled with a surge of investment in automation and lower taxes fueled growth. The key here was the widespread automation of manufacturing, which raised productivity in the face of little foreign competition.

The acceleration of globalization in the 1970s and 1980s caused an increased rate of obsolescence of domestic economic capital. The rate of fixed private investment fell when the needed response was an increase. The result was that productivity growth fell as well.

The Reagan administration attempted to counter the economic slowdown by applying fiscal stimulation in the form of income tax cuts. But in the absence of adequate incentives for private-sector FPI to enable productivity growth, industrial output remained weak, and as a result wages and profits also were disappointing.

Moreover, this investment was now competing with growing investment across the global economy, including in emerging economies with lower labor costs. Advantages in productivity, cost, or both in these other economies significantly restrained the effectiveness of the US economy’s modest response. The bottom-line economic impact was an offshoring of jobs and significant constraints on the wages of domestic workers.

In anticipation of the policy prescriptions to be discussed later, it is important to note that in the late 1990s the FPI growth rate briefly surged to an annual rate of 9.5% as companies invested heavily in computer/information technology. However, the rest of the world quickly matched such investment, so that without a broader follow-on investment strategy FPI virtually dried up in the 2000s, and income growth stagnated.

Disastrous policy response

The policy response to globalization has been a disaster. Instead of increasing productivity-enhancing investments in technology and innovation, policy-makers relied almost exclusively on a monetary policy of low interest rates and the demand-stimulation dimension of fiscal policy. Cheap credit led to more borrowing, real estate and stock market speculation, and eventually the worst recession since the Great Depression.

The government responded to the recession with even more aggressive monetary policies that resulted in the Federal Reserve balance sheet growing from $800 billion to $4.5 trillion. The critical point is that these policies are business-cycle stabilization tools, which are useful only in addressing short-term disruptions along a long-term economic growth track. The prolonged cheap credit found its way into financial markets, which mostly benefited wealthy individuals while providing no incentives to companies to make long-term investments in research and innovation.

The bottom line is that the long-term structural policy problems caused by globalization remain unaddressed. Economic stagnation and increasing income inequality have had demonstrably negative political effects, not just in the United States, but across the industrialized world. The result has been the rise of populist political movements that clamor for trade and immigration restraints and cutbacks in government spending. The latter target is particularly destructive from a long-term growth perspective, as government spending (fiscal policy) has a critical investment component—including support for new technologies that drive sustained productivity growth and hence increased economic output over time.

The 2017 proposed Republican tax-reform bill is one consequence of this populist movement. However, its economic impact will be the opposite of what people in the lower half of the economy’s income distribution expected when they swung their support to the Republican Party. The targeted corporate income tax cuts and a regressive personal income tax adjustment favoring higher-income earners will not only fail to increase long-term growth, but by increasing budget deficits will create pressure from conservatives to cut programs such as Social Security and Medicare.

History shows that income inequality and political discontent go hand in hand. Based on the most widely used metric of income distribution, the Gini coefficient, the United States ranks at or near the top among industrialized nations with respect to income inequality.
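
For reference, the Gini coefficient has a standard definition: for a population of $n$ incomes $x_1, \ldots, x_n$ with mean $\mu$,

$$ G = \frac{1}{2 n^{2} \mu} \sum_{i=1}^{n} \sum_{j=1}^{n} \left| x_i - x_j \right| , $$

where $G = 0$ corresponds to perfect equality and values approaching 1 indicate that income is concentrated in very few hands.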

Sluggish growth and high income inequality demonstrate the point: the absence of investment incentives to drive productivity growth, in the face of relentless globalization of the technology-based economy, makes the lack of a real growth policy an increasingly costly economic blunder.

Globalization and its discontents

Failure to implement an investment strategy that will raise productivity at a faster rate than competing economies and thereby allow domestic incomes to rise in real terms is not a phenomenon unique to the current US economy. Rather, it is the result of an evolutionary process repeated throughout history in which emerging economies tend to grow faster and thereby catch up to the leaders as a group. As pointed out by economist and Nobel laureate Robert Lucas Jr., the only difference with this current episode is the much faster pace at which such “convergence” is occurring.

In such global economic cycles, emerging economies initially absorb existing technology from external sources and combine it with lower-cost labor to take growing shares of global markets. As this process unfolds, poorer nations eventually acquire the ability to develop new technologies domestically, and their evolving educational institutions turn out higher-skilled labor, further improving their competitive position. China is a prime example of an economy that has reached the second phase of technology-driven economic evolution.

This process is enabled in part by the fact that emerging economies, unencumbered by past practices, adopt the strategies of industrialized nations, but with greater vigor. The result is increased productivity relative to their labor costs and thus more rapid growth in income.

In addition to emerging economies’ aggressive pursuit of greater productivity, their rates of growth are leveraged by the absence of two factors that plague established economies: the installed base effect, which reflects the difficulty of writing off existing assets that have become noncompetitive and replacing them with more productive ones, and the installed wisdom effect, which reflects the difficulty of adopting new strategies and management methods to replace those that worked well in the past but are now obsolete. Such forces of inertia help to explain why industrialized nations appear politically unable to fully grasp the effects of global convergence and enact needed reforms.

Investment-oriented growth policies that raise productivity at higher rates than competitors are necessary to solve the growth deficit problem. But in the United States, the ruling Republican Party has opted for reducing income taxes, cutting spending, and eliminating regulations, with the result of redistributing national income rather than increasing it.

The Democrats have supported several important parts of a legitimate growth strategy—education and improved digital infrastructure, and to a more limited degree, government funding for technology development—but the needed comprehensive investment-oriented growth strategy is still largely absent. Instead, Democrats have emphasized income redistribution policies, such as raising the minimum wage, which have some social justification but only a marginal effect on economic growth.

Further retarding the needed policy response, politicians also propose restricting trade, even though a viable economic future will require greater emphasis on exports, as 95% of the world’s consumers live outside the United States. Blocking imports simply institutionalizes inefficiencies in the domestic economy, guaranteeing perpetually low growth in wages.

At the state level, Republican-controlled legislatures continue to implement policies that at best preserve the low-skilled jobs of low-paid workers. Doing so serves the last gasps of old, inefficient industries that will never again be sources of significant employment, and certainly not of high-wage jobs. Kansas offers a graphic example: it coupled income tax reductions with budget cuts in an attempt to spur growth, and the strategy failed miserably.

Liberal Democrats are complicit in support for older industries by arguing that a resuscitation of union bargaining power is a major need for raising worker incomes. But union power has declined largely because the homogeneous labor pools (and hence the preponderance of interchangeable workers) of the industrial revolution are being replaced by an increasing number of smaller groups with differentiated skills. Such heterogeneity does not lend itself to collective bargaining. Instead, the focus of unions should be on skill enhancement as the dominant means to long-term wage growth.

Meanwhile, China and other emerging economies are investing ever larger amounts in technology to increase the competitiveness of their domestic industries through sustained productivity growth. The world now spends $1.5 trillion per year on R&D, of which the United States accounts for 30%. This means that for every dollar the United States spends on R&D, the rest of the world spends more than two. Even more important, in the face of relentless growth in global R&D over the past 25 years, the R&D intensity of the US economy has increased by only 5%. Germany’s increase is 19%, but even that growth is dwarfed by South Korea’s and China’s increases of 135% and 184%, respectively.

Neither side gets it

US political leaders have not accepted the fact that the policies that led to world leadership are no longer adequate, and that these policies are powerfully influenced by a political system that is financed by groups that do not want to adapt. It is certainly easier to erect trade barriers than to invest in making domestic industries competitive in global markets. Corporations shy from adopting new technologies because of the initial costs and difficulties in learning and implementing new business models. Workers resist learning new skills because retraining is too expensive and stressful, is not rewarded, or is simply not available.

The right-wing populist groups and the more liberal wing of the Democratic Party, led by Senator Bernie Sanders of Vermont, an Independent who normally caucuses with the Democrats, both favor protectionist philosophies, as evidenced by their opposition to the Trans-Pacific Partnership. Dumping the agreement basically ceded the huge Asian market to China, yet President Trump sided with these groups and terminated it.

Neither political party has a firm grip on what needs to change. Both forget that the United States rose to the top position among the world’s economies by out-investing everyone else from the late nineteenth century through most of the twentieth century in the four categories of economic assets that drive productivity and hence long-term growth in output and workers’ incomes: technology, fixed capital, skilled labor, and infrastructure.

Fiscal policy should play a role not only in business-cycle stabilization (a task shared with the Federal Reserve’s monetary policy), but also in long-term investment support for economic growth. The latter “investment” role has been underfunded and poorly managed for decades, and in recent years it has come under attack from conservative Republicans determined to eliminate budget deficits. Over time, a balanced budget is a good objective, but running a deficit for a while is often the right policy approach if the extra funds are used for investments that support greater productivity.

Indeed, the key Republican strategy (one that Democrats have previously supported) of reducing corporate income tax rates has some justification, in that nominal rates are high relative to other industrialized nations. This, however, ignores the fact that “effective” tax rates (after deductions) are much closer to those in competing industrialized economies.

Republicans argue that companies need the additional retained earnings for investment. But over the past decade US corporations have had more than enough cash to spend on increased investment, if they so desired—and apparently they do not. A study by the economist William Lazonick in Harvard Business Review calculated that over the period 2003-2012, the companies making up the S&P 500 Index used 54% of their earnings—a total of $2.4 trillion—to buy back their own stock. Dividends absorbed an additional 37% of these companies’ earnings—a payoff to stock-market investors. This does not indicate a strong desire to increase investment in productivity-enhancing innovation. Absent appropriate incentives, corporate income tax reductions will do little to remedy insufficient investment. In fact, tax reform is actually an incentive to do nothing, because companies will suddenly reap larger profits without having to change their behavior.
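
To see what those percentages imply, a back-of-the-envelope calculation (using only the figures quoted above) shows how little was left for reinvestment:

```python
# Rough arithmetic behind the Lazonick figures for the S&P 500, 2003-2012.
buybacks = 2.4e12        # dollars spent buying back stock
buyback_share = 0.54     # buybacks as a share of total earnings
dividend_share = 0.37    # dividends as a share of total earnings

earnings = buybacks / buyback_share          # ~$4.4 trillion in total earnings
dividends = dividend_share * earnings        # ~$1.6 trillion paid as dividends
retained = earnings - buybacks - dividends   # ~$0.4 trillion left over

print(f"Total earnings:        ${earnings / 1e12:.1f} trillion")
print(f"Dividends:             ${dividends / 1e12:.1f} trillion")
print(f"Left for reinvestment: ${retained / 1e12:.1f} trillion "
      f"({retained / earnings:.0%} of earnings)")
```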

Most alarming for technology investment is that the Senate version of the Republican tax bill would eliminate the corporate research and experimentation tax credit, an extraordinary indication of the low priority that Republicans place on the nation’s need for long-term innovation investment. The United States was the first economy to implement such an incentive, in the early 1980s, but it has been aggressively adopted and upgraded by competing economies. In fact, the Information Technology and Innovation Foundation calculates that the US R&D tax credit’s relative strength has fallen from 10th among nations comprising the Organization for Economic Co-operation and Development in 2000 to a current 25th position.

The Democratic Party’s preference for income redistribution through more progressive tax policies and higher minimum wages would provide some immediate social rewards. For example, the Economic Policy Institute points out that one in every five veterans would benefit from a hike in the minimum wage. However, marginal reallocation of a stagnant economic pie contributes relatively little to long-term growth. For example, although raising the minimum wage to the recent target of $15 per hour might help workers below the poverty line, it still generates an annual income of only $32,000. The Democrats’ long-term economic growth strategy, titled A Better Deal, is similarly limited in scope, focusing mainly on raising the minimum wage, investing in economic infrastructure, and undertaking some efforts aimed at unfair trade practices.

It’s the structure, stupid

The nation is facing a structural, as opposed to a business cycle, problem. The solution is investment in productivity. Productivity is another word for efficiency. Thus, when companies produce more output with less input, they can afford to pay higher salaries. In fact, they have to pay higher wages because increasing productivity entails more technology, which, in turn, requires higher-skilled workers. The Bureau of Labor Statistics notes that jobs requiring science, technology, engineering, and mathematics skills account for one out of 10 jobs in the US economy, and their average pay is 1.7 times the economy-wide average.

Ironically, the historically persistent argument against automation is that it creates unemployment. The scenario that a few highly paid skilled workers will replace many lower-skilled workers would be correct if market size remained constant, but producing goods at lower cost enables a company to expand market share and employ a larger workforce. The policy imperative is to increase domestic worker skills to levels that are not easily accessible elsewhere in the global economy, thereby providing domestic investment incentives for the world’s most productive companies.

But upskilling workers at the historical pace is not sufficient to guarantee US success in today’s global economy. Real compensation closely tracked labor productivity during the three decades after World War II, as would be expected during a period when the US economy did not face significant foreign competition. But beginning in the 1970s and 1980s, as one economy after another acquired the ability to increase productivity while benefiting from lower labor costs, US workers were no longer able to command higher salaries commensurate with historical productivity gains. Global corporations benefited from labor arbitrage by moving operations to economies where a given level of productivity could be obtained for the lowest cost.

Glimpses of what types of policy initiatives are needed to grow productivity faster than competitors can be found in some earlier policy initiatives. In the 1980s and early 1990s, Congress created several mechanisms to help industry develop and commercialize breakthrough technologies. The first major pieces of legislation, the Bayh-Dole Act and the Stevenson-Wydler Technology Innovation Act, both enacted in 1980, facilitated the transfer and commercialization of federally developed technology to the private sector.

The Federal Technology Transfer Act of 1986 promoted technology transfer to small firms. It also created the mechanism for forming cooperative research and development agreements, or CRADAs, to manage intellectual property in projects conducted jointly by industry and the national laboratories. In the same vein, passage of the National Cooperative Research Act of 1984 removed concerns over antitrust restrictions related to private-sector cooperative research. Such cooperation is important in the early phases of modern technology development where long investment time horizons and high technical and market risk combine to reduce private investment in so-called proof-of-concept technology research.

At the innovation policy level, Stevenson-Wydler also established the Technology Administration, an agency within the Department of Commerce, to develop and coordinate technology-related economic growth policies, marking the first federal institutionalization of technology-based economic growth policy. However, it was disbanded by the America COMPETES Act of 2007.

In 1988 Congress passed the Omnibus Foreign Trade and Competitiveness Act, which established two institutional mechanisms to implement federal support for technology development in support of economic growth: the Advanced Technology Program, which was conceived as a civilian analog to the Defense Advanced Research Projects Agency (DARPA) and thus funded early-phase technology research, and the Manufacturing Extension Partnership (MEP), which establishes centers across the country to provide local technology support to small firms for acquiring technical knowledge and related management expertise aimed at improving productivity and competitiveness.

Unfortunately, the policy initiatives that required direct funding, with the possible exception of MEP, were not only underfunded but were relentlessly attacked by conservative Republicans. They maintained that basic scientific research is a public good that should be funded by government, but that technology development is a completely private good and therefore should be supported by the private sector only.

In reality, technology is developed through a complex sequence of phases, becoming progressively more applied until it is ready for the market. The early phases, usually centered on proving a concept or developing a technology platform, are quite different from the final more applied phase targeting actual product development. The chances of success are smaller, progress is slow, and the results rather easily spill over to companies not investing in the research.

Because it is difficult to capture all the value of the early phases, companies are hesitant to support this work, with the result that there is considerable private-sector underinvestment. This is an example of what economists label a “market failure.” Technology policy experts call this early-phase barrier to technology development the “valley of death.” The Advanced Technology Program was designed to address this problem. However, Republican members of Congress denied the existence of a market failure, refusing to recognize the difference in investment characteristics between the early and late phases of technology development.

The next significant phase of government action came after the economic collapse in 2008. In the face of the most serious global recession since the Great Depression, Congress made a modest and short-lived attempt to use fiscal policy as a true economic growth instrument. The centerpiece was the American Recovery and Reinvestment Act of 2009, funded at $787 billion, with significant portions allocated to economic infrastructure and science and technology research. It was the beginning of a needed upgrade, but there has been no follow-through.

A comprehensive strategy

A successful innovation and economic growth strategy requires coordinated action on four fronts: technology, fixed capital, skilled labor, and technical infrastructure.

The first three categories are understood at least at a general level by policy analysts and stakeholders, even though substantial increases in all three are required. The fourth, however, is less familiar, more complex, and continuously evolving. It includes infrastructures such as digital communications networks and data storage; research-oriented institutional arrangements such as research consortia, innovation clusters, incubators, accelerators, and research database standards; and “infratechnologies” such as science and engineering data, measurement/testing methods and calibration tools, and product-acceptance testing standards.

Continual advances in technical infrastructure and its broad implementation will be required to maintain competitive positions in the forthcoming Industrial Internet of Things. The supporting information technology (IT) infrastructure will require huge investments in information and communications technologies to integrate not only manufacturing supply chains but after-sales service and software updates for product and service systems. Such a dynamic extension of current product-service supply chains will give new meaning to the concept of technology life cycles and will require a significant upgrade in supply-chain management techniques.

In contrast, the obsessive overemphasis on monetary policy, which is not even a growth policy tool, and the misguided assertion that demand stimulation through income tax reductions will create significant and sustained investment incentives for industry need to be discarded. Both are short-term actions capable only of smoothing fluctuations of the business cycle; they fail to address the heart of the growth problem.

The development and commercialization of radically new technologies can take decades—well beyond the investment time horizons and, in fact, the R&D capabilities of industry acting alone. Government support of basic scientific research is part of a long-term vision, but government must also take additional actions to enable the nation to respond to the changing global competitive environment characterized by ever more complex technologies and shorter windows of opportunity for achieving competitive positions in global markets.

Competitive success at the national level—and also the regional level in larger economies such as the United States—is determined to a significant extent by the effectiveness of the collective productivity that comes from geographic concentrations of small and large firms and a technical infrastructure capable of leveraging technology development and commercialization. These “innovation clusters” are appearing in all technology-based economies.

US support for such clusters has lagged behind many competing nations. Congress did take a useful step in the Revitalize American Manufacturing and Innovation Act of 2014, which authorized a National Network for Manufacturing Innovation. Now called Manufacturing USA, the network’s major purpose is to co-fund with industry a series of Manufacturing Innovation Institutes (MIIs). Most of the MIIs were created by the Obama administration under an ad hoc program using funds from the major R&D agencies, primarily the Department of Defense and the Department of Energy.

But the legislation failed to provide direct funding for the network, so it will be up to the mission R&D agencies to fund and manage future MIIs. As of the end of 2017, 14 MIIs had been established, but 40 to 50 should be the target in order to have significant and broad long-term national economic impact. Furthermore, only two MIIs are located in the western half of the country, leaving a large swath of the nation without this important resource for regional economic growth. Finally, funding these MIIs through the defense and energy departments means that the portfolios of research projects will reflect those agencies’ needs and therefore may be suboptimal for stimulating overall economic growth.

Promoting regional and sectoral clusters of firms in high-tech supply chains addresses the reality that modern technologies are complex systems. Such systems require research in a variety of technical disciplines, which mandates coordination and efficient interfaces among a large number of companies making up the evolving new supply chains. The inherent complexity means that co-location synergies among component suppliers and system integrators are significant for both conducting research and integrating the results into the evolving technology system.

Regional innovation clusters also boost overall economic efficiency by offering large and diversified pools of skilled labor. Workers can move among companies much more efficiently as labor needs shift. Toyota recently announced that it would invest $1 billion over the next five years in the development of artificial intelligence and robotics. The company chose the mother of innovation clusters, Silicon Valley, as the location for this research because of the unparalleled availability of the needed research talent.

The message for policy-makers is that investment creates productive assets, which in turn enable sustained growth. Unfortunately, neither political party fully appreciates the investments required to create advanced technologies and develop them into forms that enable market applications.

The Democrats at least partially recognize the strong public-private good character and the complexity of the early phases of R&D, as evidenced by President Obama’s support for innovation clusters. They also have introduced legislation to provide infrastructure support for small businesses and entrepreneurs. Unfortunately, these efforts are largely ad hoc and incomplete.

The Republicans are further off course, implicitly claiming that government should support development of only those technologies useful to a government mission such as defense. They see no need to nurture the development of technologies that will contribute directly to economic growth.

The final critical policy point is that technologies evolve in cycles. Information technology moved from mainframe computers to personal computers to smartphones. The generic technology platform remained the same, but each of these IT applications differed in hardware, software, and markets. The challenge for policy-makers is to understand how each of these developments differs and to adapt federal supportive activities accordingly.

And as described above, even within specific technology life cycles there are stages of development that require different types of assistance. Investing in the assets necessary to develop and commercialize new technologies that drive productivity growth requires policies that accelerate the replacement of existing capital stocks with new intellectual, physical, and infrastructure assets. The most important policy tools are funding for early-phase technology research, education and training, technical infrastructure, and tax incentives for applied R&D and capital investment.

This policy mix stands in direct contrast to fiscal stimulus through corporate income tax cuts emphasized in the current tax reform effort. The nation’s focus should be on productivity-enhancing investments, not a company’s bottom line—the latter will be improved only over time by the former. The most urgent need is increased investment in infrastructure, particularly the high-tech infrastructure necessary for a modern economy. The cost will be high, but failure to make the needed investments is a recipe for a future of continued economic decline and falling incomes. US policy-makers need to understand that with respect to efficiency, governments compete against each other as much as do their domestic industries.

Rethinking Infrastructure in an Era of Unprecedented Weather Events

The United States is at an infrastructural crossroads. First, the climate is changing faster than built infrastructure and the institutions that manage and maintain it. Recent extreme weather events highlight the precarious state of the nation’s infrastructure and the limits of cities’ ability to adapt to climate change. After the nation broiled through its hottest summer on record in 2016, 2017 began with one of the wettest winters on record for California and the Pacific Northwest. The 2017 hurricane season proved to be the most devastating and costly in the nation’s history. Hurricanes Harvey in Texas and Irma in Florida inflicted as much as $290 billion in damages. No Atlantic hurricane in the past 60 years has been as intense as Maria was over the US territory of Puerto Rico. Two months after the hurricane, fewer than half of Puerto Rico’s 3.4 million residents had regained electric power. According to some estimates, Maria may have set the Puerto Rican economy back by a quarter century in just 12 hours. And adding to the list of miseries, a series of wildfires that ignited during volatile weather conditions in October devastated large areas of northern California and claimed at least 43 lives.

Second, US infrastructure—in such diverse sectors as transportation, energy, and water—needs billions of dollars of investment merely to maintain current service levels, according to the American Society of Civil Engineers. Aging infrastructure, based on decades-old assumptions about societal needs and environmental conditions, must continue to deliver services to communities with changing needs, demands, technologies, and values. Combined sewer-storm water systems, for example, were the standard for many cities to manage wastewater and storm water in the late 1800s and early 1900s. However, due to changes in public health and environmental concerns since these systems were built, most cities now recognize that the cost savings of combining these systems are outweighed by the hazards created when sewage overflows into waterways during heavy precipitation events. As a result, cities such as Portland, Oregon, and Philadelphia have had to spend millions of dollars over the past 25 years to retrofit their combined systems to comply with the US Clean Water Act and other environmental regulations. As cities and states work to deliver services, they must also deal with the legacy of these existing outdated systems.

Finally, over the coming years there may be massive investments in the nation’s infrastructure. Cities, states, and regions will continue or ramp up efforts to maintain and retrofit infrastructure to deal with increasing demands, changing populations, and the specter of climate change. The states and regions affected by recent extreme events will recover and rebuild. Meanwhile, the federal government has proposed to invest up to $1 trillion in infrastructure while at the same time, according to an August 2017 executive order, reducing requirements for federal spending on infrastructure to account for climate risks.

How cities, states, regions, and the federal government navigate these key issues will determine the path taken at this crossroads. Will it be a path that uses the technologies and climate conditions of the twentieth century to design for tomorrow? Or one that rethinks how infrastructure is designed, managed, and maintained for the technologies, societal needs, and hazards of the twenty-first century?

We examine some of the underlying social, ecological, technical, and institutional issues that often seem to set infrastructure up for failure. We focus primarily on failures in the context of climate change and extreme weather events. We review several cases with an eye toward the lessons that policy-makers, infrastructure engineers, and managers can glean to conceptualize, design, build, and maintain the infrastructure of the future. We then explore emerging innovations that provide insights into a more resilient future.

Climate change and extreme weather grab headlines and present a fundamental challenge to the ability of infrastructure to protect communities. But beneath the seemingly endless cascade of catastrophes lie consistent, systemic failures in current approaches to infrastructure. One common failure is an overconfidence, bordering on hubris, in the ability to tightly control complex social and ecological systems through the management of technological systems. Another is the failure often associated with managing interdependent infrastructure systems. And there are failures in the ability of institutions that manage infrastructure to generate, communicate, and utilize knowledge. This list of failures is not exhaustive, nor is it meant to be. Instead, the discussion focuses on these consistent drivers of infrastructure failure that cut across multiple infrastructure types, extreme event categories, and jurisdictions.

To reveal and understand these drivers, we view infrastructure as not just the built hardware. It is also the institutional rules, norms, knowledge, and standards that govern how infrastructure is designed, maintained, and managed; the social norms and expectations about the use of services delivered by infrastructure; and the ecological systems that are designed or managed, or both, by infrastructure. Infrastructure, then, comprises not simply technical systems, but interconnected social, ecological, and technological systems.

Control of complex systems. In his 1989 collection of essays, The Control of Nature, John McPhee examines how humans attempt to exert control over natural systems. He describes efforts to fend off lava flows in Iceland and curb landslides to make way for development in greater Los Angeles. But it is in “Atchafalaya,” an account of the US Army Corps of Engineers’ actions to prevent the Mississippi River from changing its course, that he most effectively captures the futility of human efforts to control complex systems.

McPhee illustrates how the Army Corps, with support from local politicians and communities, designed the Old River Control Structure to regulate the flow of water from the Mississippi River to the Atchafalaya River. Without this structure, the flow would increase over time, eventually resulting in the Mississippi changing its course. Needless to say, this would be inconvenient for urban and rural communities, including New Orleans, that rely on the river and its various engineered structures for irrigation, flood control, and commerce. As McPhee notes, “for nature to take its course was simply unthinkable.”

Engineers designed the Old River Control Structure and other flood control systems in the region to handle certain degrees of flooding, calculated using historic precipitation data and water flow rates. Yet as geologists and hydrologists know, the Mississippi River and Delta comprise a complex and dynamic system that has evolved and meandered over time. Attempts to control the system have “harnessed it, straightened it, regularized it, shackled it,” as McPhee said. When elements of the system fail, however, the results are catastrophic, as demonstrated during the flooding events along the Mississippi in the 1990s and with Hurricane Katrina in 2005. Dams along the river system also starve the Mississippi River Delta of silt that is needed to replenish the wetlands, an invaluable source of coastal storm surge protection. In addition, sea level rise further erodes the wetlands. The conclusion of McPhee’s essay still rings alarmingly true: “It’s a mixture of hydrologic events and human events. It’s planned chaos.”

This story illustrates how infrastructure has traditionally been designed to manage environmental hazards or deliver a narrow set of services. Society builds infrastructure to remain structurally or functionally sound up to a particular severity of event, such as a 1-in-100-year or 1-in-500-year intensity rainfall. This so-called fail-safe approach to infrastructure design has led to large and often oversized infrastructure, with little to no thought given to how to manage the consequences of failure. Such designs also often focus on a single service (such as flood control) at the expense of other potential services (such as thermal regulation, recreation, and coastal storm surge protection). With the uncertainty that climate change imposes on the frequency and intensity of extreme events, this risk-based model of infrastructure design needs to be questioned. The costs of building ever-larger infrastructure may be prohibitive, and the potential for failure is likely to increase.
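
The arithmetic of return periods helps explain why. A 1-in-N-year event has, by definition, a 1/N chance of occurring in any given year, so over an infrastructure design life of many years the chance of seeing at least one such event is surprisingly high. A minimal sketch of this standard calculation (assuming independent years and a stationary climate, the very assumption that climate change undermines):

```python
# Probability of at least one "N-year" event during a design life of L years:
# P = 1 - (1 - 1/N)**L, assuming each year is independent.

def exceedance_probability(return_period_years: int, design_life_years: int) -> float:
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

for n in (100, 500):
    p = exceedance_probability(n, 50)
    print(f"1-in-{n}-year event over a 50-year design life: {p:.0%}")

# 1-in-100-year event over a 50-year design life: 39%
# 1-in-500-year event over a 50-year design life: 10%
```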

California experienced a record wet winter in 2016-2017, receiving more than 400% of the average amount of precipitation. Cities and towns from Humboldt in the north to Los Angeles in the south were flooded, sinkholes swallowed cars, residents were evacuated, and roads and schools closed throughout the state. These extreme precipitation events were punctuated on February 12, 2017, when 188,000 people around the city of Oroville were ordered to evacuate their homes because the emergency overflow spillway on nearby Oroville Dam appeared to be failing, threatening to flood local communities. This marked the first time the spillway had been used since the dam’s construction in 1968. Fortunately, the dam escaped a massive failure, but the incident underlined the degree to which a fail-safe approach to infrastructure seems increasingly tenuous as design conditions are more routinely exceeded in a changing climate.

Interdependence of infrastructure. Although it is clear that infrastructure components are interdependent, they are often designed, managed, and maintained as separate entities. The transportation bureau manages the transportation system. The storm water bureau manages storm water. And so on. Yet the extent of these interdependencies is likely increasing, creating complexities that outstrip current understanding of how perturbations cascade into large-scale outages. It is well established that the services provided by one infrastructure are required for others to function (for example, power generation requires water, and traffic signaling requires electricity). What is less well known is how the decades and centuries of building and interconnecting infrastructure, embedding new hardware, and lately connecting with information and communication technologies have resulted in a kludge of unpredictability. The 2011 Southwest blackout, for instance, shows how vulnerabilities can propagate across infrastructure. What began as a minor outage in Arizona cascaded to Mexico and Southern California over the course of 11 minutes. The blackout ultimately left roughly seven million people without power, and it disrupted transportation services as well as water treatment capacity.

More recently, the destruction of Puerto Rico’s energy system by Hurricane Maria not only resulted in the largest power outage in US history, but it also had compounding effects on other critical infrastructure necessary for relief efforts after the disaster. The island’s entire communication infrastructure, including cellular networks and telephone lines, broke down, rendering emergency managers, government agencies, and Federal Emergency Management Agency (FEMA) officials unable for days to share information about the storm’s damage and move rapidly to implement relief efforts on the ground. The island’s main airport could not function without power or communication, and thus for days it could not receive airplanes with shipments and people could not leave. The power outage also affected the island’s ability to maintain basic services for the population, such as providing clean water, maintaining life-support health equipment, and pumping flood waters.

Part of the vulnerability of the island’s energy grid was its own interconnectedness and lack of redundancy. The centralized electric grid ran almost entirely on fossil fuels, all of which are imported, and electricity was transmitted through a decaying system of towers and distribution cables. Maria’s 155-mile-per-hour winds destroyed more than 200 transmission towers and hundreds of miles of transmission lines, as well as thousands of distribution lines that connect individual households and businesses to the grid. Before the hurricane, the agency responsible for the governance of the electricity system, the Puerto Rico Electric Power Authority, was deeply in debt (it owed $9 billion of Puerto Rico’s more than $74 billion debt) and could not maintain the grid or provide backup systems for redundancy, especially for more remote rural areas. The only backup that residents and businesses had were electric generators that run on gas or diesel, but these fuels had to be imported from the US mainland and transported from shipping ports to gas stations. More than a month and a half after the hurricane, only 42% of the power generation capacity had been restored, leaving Puerto Ricans intensely aware of how dependent their resilience is on this infrastructure.

Knowledge systems. The effects of extreme weather events on infrastructure have also exposed a number of failures in institutional knowledge systems: the organizational practices and social structures that produce the information, data, and expertise on which engineers, designers, and decision-makers rely. A post-Hurricane Katrina report by the American Society of Civil Engineers in 2007, for example, showed how the combination of inadequate knowledge and unfortunate choices at all levels of responsibility led to the engineering portion of the disaster, including miscalculations of the size of the levees and flawed models of the variability of soil conditions in New Orleans. The complications of multiple and overlapping political and legal jurisdictions, and the weak institutional authority of the New Orleans Hurricane Protection System, led to a failure to detect emerging vulnerabilities in the levee structure.

To cite another example, the Phoenix metropolitan area in 2014 experienced a 630-year rain event in August followed by a 984-year event in September, the latter bringing the highest amount of precipitation ever recorded there in a single day. Both events caused flooding of Interstate 10 and major traffic disruptions. The flooding was not the result of a breakdown of hardware. Instead, the technology functioned as it was designed to do. The pumps, which were designed for much lower-intensity rainfalls, automatically turned off to protect themselves from overheating. Such design conditions are set through a number of processes within the institutions that manage infrastructure, but in these cases they failed to take into account the most extreme weather events.

Inefficiencies in the knowledge systems supporting the analysis and communication of risk distribution in urban areas also limited the ability of city officials in Houston and San Juan to appropriately communicate the risk and reduce the vulnerability of their populations to extreme weather variability. Hurricanes Harvey and Maria revealed how little awareness people had of their own vulnerability. Homeowners living in flood zones were not aware of their exposure to high flood risks. Most people in these areas were likely aware of their exposure to a 100-year flood, because living in such a zone requires purchasing flood insurance through FEMA; but the past hurricane season brought multiple 500-year floods. Furthermore, for many cities, including San Juan, the FEMA flood maps that determine where flood hazards are located are outdated, and thus many residents were not aware of the higher risks they were facing with these extreme events. Similarly, many homeowners in Houston did not know they had bought homes in marshlands that were intended to flood when the bayou system flooded.

A recent analysis by the US Department of Homeland Security revealed that 58% of FEMA flood maps are either inaccurate or out of date. In the wake of Superstorm Sandy in 2012, the flood maps for New York City were famously exposed as woefully outdated, with the most recent update coming in 1983. Yet even if the flood maps were 100% accurate and up-to-date, they are based on retrospective data and still would not account for future conditions such as climate change. These seemingly mundane codes and standards carry embedded assumptions about climate and weather conditions that form the DNA of the nation’s infrastructural systems that support modern life.

These examples show how infrastructure failure is a complex process that involves the breakdown of not only physical hardware but also the institutions that manage the hardware, as well as post-disaster recovery.

Given the uncertainty of climate change, the degraded status of US infrastructure, and the potential for large investments in rehabilitation and new construction, the processes that society uses to design infrastructure should be fundamentally questioned. Climate change can introduce so much uncertainty that simply shifting probability distributions for future events and continuing with standard practice is likely no longer sufficient. At some of the more severe ends of climate forecasts, the infrastructure components that would need to be designed are potentially so large, costly, and aesthetically unpleasing—and possibly technically infeasible to construct—that current forms of infrastructure in some situations may be obsolete. New models are needed that balance fail-safe designs with other resilience strategies, including green infrastructure and safe-to-fail systems that do not promise absolute protection but result in limited damage when they do fail. Green infrastructure systems have been used across the nation to help retain water and thereby reduce the potential for flooding. New models for infrastructure will need to be smarter about recognizing the consequences of failure, allowing infrastructure to fail, and managing the consequences of those failures.

Approaches, old and new, to urban flooding provide some promising examples of how to build more resilient infrastructure. In the 1960s, a controversy emerged between the community of Scottsdale, Arizona, and the Army Corps of Engineers about how best to manage flooding in a rapidly urbanizing area along the Indian Bend Wash. The traditional approach, advocated by the Army Corps, was to turn the wash into a concrete-lined channel. Think of the Los Angeles River in the famous Terminator 2 scene that has the T-1000 driving a semi-truck in pursuit of John Connor on a dirt bike. The Scottsdale community successfully fought the Army Corps to design and build an 11-mile-long greenbelt—a series of parks, ponds, and, of course, golf courses (this is Arizona after all)—that allows the wash to flood without damaging the surrounding property.

This type of safe-to-fail design that allows for some flooding has been adopted elsewhere as well. The Netherlands, much of which lies precariously below sea level and which has historically done as much as possible to prevent flooding, recently implemented what it calls the Room for the River program. Instead of building ever-bigger levees to hold back water, the Netherlands manages the consequences of failure by letting farmers use the land along flood-prone waterways and reimbursing them when crops are damaged. US cities are also giving more room for flooding along rivers and coastlines. After Sandy, New York City offered buyouts to Staten Island residents on the shoreline whose homes were destroyed or threatened. And in Portland, Oregon, the Bureau of Environmental Services and Portland Parks and Recreation collaborated to purchase the homes of residents located in a flood-prone area along Johnson Creek, a tributary of the Willamette River. This area on the east side of the city had flooded consistently over previous decades, including during the Great Flood of 1996. The city restored this area of the floodplain in 2012 to create the Foster Floodplain Natural Area, which allows the area to flood and thereby helps to alleviate flooding further downstream.

These examples demonstrate how infrastructure changes require institutional and knowledge systems changes. For example, knowing how to design ecological functions such as storm water regulation or thermal regulation through the planting of trees and other plants is not only a technical or ecological issue. It also necessitates new forms of coordination between governmental organizations responsible for delivering different kinds of services with different sources of funding. Storm water management bureaus, for instance, are often allowed to spend ratepayer monies only on storm water benefits. As cities look to green infrastructure for thermal regulation to ameliorate urban heat island issues, they must also overcome institutional barriers to designing services.

Emerging data and communication technologies can also help cities get smarter about infrastructure design and maintenance. Advocates of initiatives such as “smart cities,” which rely on big data analytics, and the “internet of things,” which harnesses advanced digital tools and devices, view digital technologies as a connected infrastructure of data collection, use, and interpretation that can optimize the operations of a city toward smarter economies, environmental practices, and governance. For instance, early warning systems for coastal flood hazards that include a network of data sensors throughout the city can help flood and emergency managers better understand how flood waters are distributed and which people and places are most at risk. While initiatives to create smart cities have the potential to help communities anticipate events and develop adaptation strategies to climate change, their effectiveness rests on advances in a multiplicity of technological as well as cognitive, social, and institutional factors that are embedded in these smart systems’ technologies. Nevertheless, if used in meaningful ways, these innovations in data systems and digital technologies have the potential to help protect people, improve their quality of life, and increase infrastructure resilience to climate change. Rather than being treated as technological fixes, such technologies and data analytics can serve as opportunities to improve decisions when appropriately embedded in institutional decision-making contexts.
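
To make the idea concrete, here is a minimal sketch of the kind of threshold-based alerting logic such a sensor network might feed; the sensor names, threshold, and readings are all hypothetical:

```python
# Hypothetical flood-sensor readings and a simple alert rule: flag any
# location whose water level exceeds an assumed impassability threshold.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str         # hypothetical street-level sensor identifier
    water_level_cm: float  # current measured water depth

ALERT_THRESHOLD_CM = 30.0  # assumed depth at which streets become impassable

def flooded_zones(readings: list[Reading]) -> list[str]:
    """Return the sensors reporting water above the alert threshold."""
    return [r.sensor_id for r in readings if r.water_level_cm >= ALERT_THRESHOLD_CM]

readings = [
    Reading("bayou-12", 12.0),
    Reading("harbor-03", 41.5),
    Reading("downtown-07", 33.2),
]
print("Dispatch alerts for:", flooded_zones(readings))
# Dispatch alerts for: ['harbor-03', 'downtown-07']
```

A real deployment would of course layer forecasting, spatial interpolation, and institutional response protocols on top of raw thresholds; the point is that the data infrastructure pays off only when it is wired into decision-making.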

Puerto Rico may provide a case in point. As state and federal agencies move quickly to fix the energy grid and deliver power to millions of people, many policy-makers, politicians, energy experts, and residents recognize that this will not be a long-term solution. Instead, they are viewing this breakdown of the infrastructure as an opportunity to reconstruct the system using more sustainable and clean energy options. After seeing—and in some cases experiencing firsthand—how fundamental energy is to the resiliency of the island, local and national leaders are calling for strategies to phase out the twentieth-century centralized power model and move toward more resilient alternatives such as solar micro-grid technologies. Such an energy transformation will require not just new ways to redesign the technological aspects of the infrastructure, but innovations in the governance of the infrastructure. In this light, Luis A. Avilés, a former chair of the island’s electric power authority and current law professor at the University of Puerto Rico, has called on Congress to design and implement island-centric energy policy and economic incentives to ensure that Puerto Rico and other US territories can get the energy they need while not being so dependent on the mainland.

These innovations display a consistent ability to look beyond narrow technical design decisions to broader rethinking about the social, ecological, and technological means and arrangements that provide services to communities. To build more resilient infrastructure, cities, states, and regions will also need to reconceptualize what services they provide, to whom, and how they arrange social, ecological, and technological systems to do so.

Moreover, decisions today will create a new infrastructural legacy that will last well beyond today’s problems. Evidence continues to accumulate that many components of infrastructure are unable to cope with more extreme events and that building bigger and stronger simply may not be feasible. The inability of infrastructure to handle climate extremes is rarely an issue of poor engineering or faulty technical designs. Instead, it’s that infrastructures were designed for different weather patterns as well as different social values and demands. And into the future, these events are expected to become more frequent, intense, and unpredictable. Demands will increase and values will surely evolve. This future should give us pause and prompt the question of whether the models of infrastructure that scientists and society have come to rely on are sufficient going forward.

To address the interdependence of infrastructure systems, the institutions that build, manage, and maintain them must explore new models of institutional design.

Resilience must be understood as the capacity of institutions and the infrastructure they oversee to adapt to unpredictable and changing conditions, not just in terms of the infrastructure hardware, but also in terms of the people who rely on the systems and the institutions that manage them. Both the failures and the positive innovations we have discussed here highlight the need to take a broader view of infrastructure as dynamic systems comprising interconnected social, ecological, and technological components. Society must therefore look for ways to foster resilience across all of these systems.

Toward that end, we suggest the following actions:

How to Reinvigorate US Commercial Nuclear Energy

December 2017 marked the sixtieth anniversary of commercial nuclear energy in the United States, which began with the opening of the Shippingport Atomic Power Station in Pennsylvania. It also marked the seventy-fifth anniversary of the scientific birth of nuclear power, when Chicago Pile-1, built as part of the Manhattan Project and located in an abandoned squash court underneath Stagg Field in the middle of the University of Chicago campus, achieved the first controlled, self-sustaining nuclear chain reaction. Over these years, US policies and innovations successfully underpinned the growth of global commercial nuclear trade, while addressing the threat of non-peaceful applications of commercial technologies.

Today, the global nuclear enterprise comprises mature and growing global markets and a diversity of international suppliers. Trade policies focused principally on nonproliferation, which evolved and were implemented incrementally over decades, can no longer provide the basis for US industrial engagement and competition. The United States no longer monopolizes leadership and influence in global nuclear energy markets, and if current trends continue unabated, its role will continue to decline.

We believe that global nuclear energy markets are too large an opportunity, the potential strategic and economic benefit of US involvement in these global markets too broad, and the potential implications for future US domestic nuclear energy deployment too significant to be ceded to global competitors—at least not without thoughtful and complete strategic analyses of, and debate over, the potential consequences.

The global civil nuclear energy supply chain is a mature industrial enterprise servicing not only existing markets but also a growing number of new ones. With an estimated value of $2.6 trillion over the coming 20 years, this supply chain includes new reactor development and construction, myriad fuel cycle services for existing reactors, power generation equipment, professional services, training, reactor life extension, and decommissioning services. Where once the market action was taking place mostly in the United States, now the markets are principally based elsewhere, with 440 commercial power reactors operating in 31 countries. State-owned enterprises in Russia, China, and Korea provide the majority of new reactors, with India gaining strength through its own domestic market. Flagship US technology providers are subsidiaries of foreign industrial giants or operate as closely aligned strategic partners. Where once US industry held the vast majority of nuclear-qualified manufacturing (so-called N-stamp certification, issued by the American Society of Mechanical Engineers to indicate a level of quality assurance appropriate for nuclear applications, or similar quality certification), it lost its majority in 2010.

Despite the globalization of the commercial nuclear trade and declining US nuclear power plant exports, the United States is still home to the largest nuclear generating capacity of any single nation, and the sheer size of its fleet has enabled a relatively robust domestic supply chain for reactor and operations services. But China will likely surpass the United States in number of reactors deployed within 20 years, and the sustainability of the US supply chain may be threatened if the recent trend of premature closure of domestic commercial nuclear power plants continues.

Meanwhile, nuclear deployment business models are emerging that would have been difficult to imagine in past decades. Korean enterprises are supplying the first nuclear power units in the Middle East, to the United Arab Emirates (UAE). These systems are based on US technology and use US engineering and management contractors. A planned nuclear renaissance in the United Kingdom is to be supplied in part by European reactor technology, built by an international workforce, and financed in part by Chinese investment. Russian technology is capturing strategic global market share using so-called build-own-operate business models backed by Russian state financing, assuring a lengthy Russian presence where these systems are sold (as with Korea and the UAE reactors). US leadership in the development of norms of global nuclear trade involving quality, safety, operations, and nonproliferation, as well as its own experiences in deployment, has paved the way for these competitors. For example, the Russian state-owned enterprise Rosatom is leveraging commercial nuclear market leadership to pursue broader advanced manufacturing and technology leadership, while Korea followed the early market model of the United States and used nuclear trade to enhance a broader manufacturing base. (Korea recently announced a phase-out of new domestic nuclear energy builds, even as it aims to maintain its growing export presence and efficiency and complete domestic reactors already under construction.)

International consortia and partnerships are displacing US dominance in the international nuclear energy trade, instances of the increasingly familiar trends of manufacturing specialization that drive transnational sourcing in many manufacturing markets. These partnerships and sourcing approaches are creating exceptionally cost-competitive and reliable nuclear deployments in the global market that significantly outperform US efforts. Whereas the first reactors to be ordered and built from scratch in the United States since the Three Mile Island accident in 1979 are behind schedule and projected to cost over $11,000 per kilowatt of capacity, if they are finished at all, Korea and Japan have recently built reactors at a cost in the neighborhood of $4,300 per kilowatt. The elements of this stunning difference in cost have not been completely quantified, but they likely can be attributed to factors including differences in regulatory approach, labor costs, and availability of public-private financial vehicles. Less tangible, but perhaps equally important, is that international supply chains (including professional and construction services and management) have had several decades to develop the knowledge, capability, expertise, and mature designs that reduce cost and construction risk, while US construction supply chains have largely atrophied.
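To see the scale of the gap, here is a back-of-the-envelope sketch using the per-kilowatt figures above; the 1,100-megawatt unit size is an assumption chosen only for illustration:

```python
# Rough overnight-capital-cost comparison for a single large reactor,
# using the per-kilowatt figures cited above and an assumed 1,100 MW unit.
capacity_kw = 1_100 * 1_000        # assumed 1,100 MW plant

us_cost_per_kw = 11_000            # projected US cost cited above ($/kW)
asia_cost_per_kw = 4_300           # recent Korean/Japanese builds ($/kW)

us_total = capacity_kw * us_cost_per_kw      # ~ $12.1 billion
asia_total = capacity_kw * asia_cost_per_kw  # ~ $4.7 billion

print(f"US build:    ${us_total / 1e9:.1f}B")
print(f"Asian build: ${asia_total / 1e9:.1f}B")
print(f"Cost ratio:  {us_cost_per_kw / asia_cost_per_kw:.1f}x")
```

At these rates a single US unit costs roughly two and a half times its Korean or Japanese counterpart, which is the difference the paragraph above seeks to explain.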

Yet US private capital totaling $1.3 billion from dozens of companies is beginning to trickle into the development of advanced nuclear power systems, formerly the realm of only a few state-sponsored efforts. Compelling global market potential is the draw. Entrepreneurs are attracting venture capital as they line up to develop systems that can meet the growing appetite for clean energy for a world with a changing climate and a population that could reach 10 billion by 2050. The US Department of Defense is also once again assessing the feasibility of land-based nuclear power plants, mostly small “micro-reactors,” for military base electricity needs, and this could provide another market for private vendors and innovators. These opportunities offer a sharp contrast to the rather uniform landscape of nuclear energy development and deployment of the 1960s, when the basis of present US export policy was established.

The advanced reactors envisioned by today’s US-based private-sector innovators are targeted to overcome many of the operational challenges of existing systems through the use of such features as passively safe designs, simpler system architectures, advanced monitoring and controls, and more robust nuclear fuels. They are also being designed to provide both baseload power to the electrical grid and zero greenhouse-gas process energy and electricity for growing industrial energy needs in the manufacture of steel, fertilizer, bulk commodity chemicals, and other energy-intensive applications. These integrated systems are being designed to operate within a dynamic energy grid alongside fossil and renewable energy systems, and could provide new approaches to providing increasingly valuable grid stabilization services to help overcome intermittency challenges of wind and solar energy. For entrepreneurs, these new integrated systems provide avenues to dramatically expand global markets for nuclear energy. Our colleagues at Idaho National Laboratory have estimated the global market potential for nuclear could grow from a total of $2.6 trillion over 20 years to over $4 trillion if reactors were integrated as clean energy sources into industrial processes. US innovators may hold a key competitive advantage in developing these integrated nuclear systems if research continues to mature in areas such as advanced catalysts, high-temperature nuclear reactors, and other technologies that enable a more efficient use of nuclear-grade process heat for a variety of manufacturing industries. Such systems may provide the differentiator that makes the next generation of US advanced reactors and services more desirable than the stand-alone electricity producing reactors that dominate production in the global market.

The United States finds itself in a dramatically different competitive position and market context than at the birth of the industry. The global market has changed, the domestic market has changed, global public attitudes have changed, and norms of global trade have changed. Amidst such ferment, the future of the US domestic nuclear energy market is uncertain at best. In the past five years, premature closures have been announced for or completed at 11 sites in 10 states, totaling 14 reactors and almost 14,000 megawatts of baseload electricity. A flagship US product designer and supplier, Westinghouse, has gone bankrupt. Two of the four nuclear reactors under construction in the United States have been cancelled because of substantial cost overruns. Meanwhile, foreign nuclear markets expand. Our view is that the strategic importance of a US domestic commercial nuclear energy industry, serving domestic and international markets, and its relationship to supply chains serving US nonenergy and foreign markets, needs to be actively debated among US policy-makers. A decline in the US domestic nuclear energy industry, and the coincident decline in the nation’s commercial nuclear export presence, would have broad national strategic implications in the economic, defense, regional geopolitical, and energy realms. Yet the strategic consequences of a continued erosion of the US competitive position in nuclear energy have not been quantitatively assessed in the context of global market trends. Clearly identifying, and where possible quantifying and prioritizing, the strategic value of both the domestic and the export industries across the breadth of the commercial nuclear supply chain is essential for charting a practical and efficient path to realizing the potential of US industrial leadership in global nuclear energy markets.

With this in mind, we present six strategic principles for assessing the US position in the global market. These principles can aid in understanding the relative importance, and possible cost-benefit, of a variety of strategic considerations, and help assess whether, and how, to address the US competitive position in global nuclear energy markets.

Globally, nuclear energy will be a necessary element of national energy infrastructures, and therefore markets will likely continue to expand. Reliable, zero-carbon-emission electricity and energy for industry are key to human well-being and societal stability, and technical and industrial leadership in assuring an energy-rich future will be rewarded economically and politically. As we have emphasized, there are strong reasons to expect that global demand for nuclear energy will expand well into the future, and that demand for servicing the existing fleet will continue for decades. In many instances, countries adopting nuclear will do so as a means of ensuring their security through access to reliable and diverse energy sources while simultaneously building a skilled technology-based workforce. The global market size and potential—for new capacity and operation of existing capacity—is likely substantial and should be a focus of future national policy.

Considerable financial gain can be realized across the broad supply chain that comprises commercial nuclear energy. The potential financial reward for investing in nuclear energy has been drawing private-public investment and spurring international industrial partnership models and private-sector innovators similar to those found in many other global industrial markets. The markets encompass new construction and technology, industrial applications, manufacturing of components, operations and other professional services, and decommissioning. Importantly, the export opportunities afforded by global growth in nuclear energy (for example, Saudi Arabia’s recent announcement that it would procure 17.6 gigawatts of generating capacity) should not be viewed solely in terms of providing a new reactor or reactor system. Various nodes of the total supply chain, such as individual components, advanced fuels, intelligent and autonomous control systems, and operations expertise, may provide market niches, competitive positions, and, eventually, national strategic value. This may be particularly true when, as may often be the case in the future, US companies are competing with state-owned foreign enterprises in nationally controlled markets, and advanced components and approaches offered and manufactured by US companies can be integrated into non-US technology platforms. For innovators and existing suppliers, the opportunities across the broad supply chain may include a breadth of components, materials, and services supporting various advanced reactor designs.

Tapping the breadth of global markets in nuclear technology supply chains could also be one avenue to help reenergize domestic manufacturing expertise and exports. Restoring the balance of trade and creating middle-class jobs are key objectives for an expansion of US manufacturing, with the US Department of Commerce estimating that 5,000 to 10,000 domestic jobs are created for every $1 billion in exports. By this estimate, manufacture and export of advanced reactors, nuclear power system components, fuel cycle services and materials, and other components and services for even a relatively small portion of the multitrillion dollar trade in nuclear power could produce tens of thousands of US jobs. If the United States then leads in innovation to expand these markets to include industrial integration of commercial nuclear technologies as we have described, the potential market and associated manufacturing and professional services jobs base could plausibly achieve an export base on the order of $100 billion within a decade or so, with commensurate job growth. This opportunity has not gone unnoticed by economic competitors of the United States. For example, in 2012 the United Kingdom developed a Nuclear Supply Chain Action Plan with similar job-creating objectives, even as its own domestic nuclear industry was flat. Economic opportunity associated with US market position in global nuclear energy should be a key strategic national objective, and innovators in the United States will need mechanisms, such as financing vehicles, modern export policies, and technology development and demonstration support, that match the global trade reality of today—not of decades past—to successfully engage the global markets.
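A rough sketch of the arithmetic behind that claim, applying the Commerce Department rule of thumb cited above to the illustrative $100 billion export base from the text:

```python
# Jobs implication of the Commerce Department rule of thumb cited above
# (5,000-10,000 domestic jobs per $1 billion in exports), applied to the
# hypothetical $100 billion nuclear export base discussed in the text.
jobs_per_billion_low, jobs_per_billion_high = 5_000, 10_000
export_base_billions = 100  # illustrative export base from the text

low = jobs_per_billion_low * export_base_billions    # 500,000 jobs
high = jobs_per_billion_high * export_base_billions  # 1,000,000 jobs
print(f"Implied domestic jobs: {low:,} to {high:,}")
```

Even at the low end of the range, the implied employment effect runs to several hundred thousand jobs, which is why the authors treat the export opportunity as strategically significant.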

Engagement and leadership in the global commercial nuclear energy market will be key to a cost-effective future for US domestic nuclear reactor construction. As previously summarized, new reactor construction in the United States is inefficient and too expensive. This may in part reflect a focus on large gigawatt-sized systems in markets not demanding many of them, and in part a long-term atrophying of the supply chain, including nuclear construction experience. The domestic market alone likely will not provide sufficient opportunity in the short-term to exercise such supply chains to the point of efficiency. But the global market will. Developing international partnerships, transnational supply agreements, and strategic partnerships based on national specialization will enable US suppliers to dominate in portions of the supply chain where they are most competitive, and partner for other portions. This type of global focus can create a path to achieving deployment costs in the United States that approach global averages, and it may offer opportunities for new technology offerings from US companies that are on the near horizon, such as small modular reactors and micro-reactors, and associated supply chains.

US engagement internationally is still important to nuclear security and safety, but it is just one lever in a broader field of credible policy and engagement options. International engagement and innovation by the United States have been and will likely continue to be influential in assuring that nuclear energy technologies are used for peaceful purposes and maintain the highest safety standards. International agreements and nuclear export protocols are as important as ever, but they require a fresh examination of present export rules and constraints, financing policy for export industries, and other policies that affect the ability of US industry to compete for global export markets. Current export rules, including those associated with Section 123 of the Atomic Energy Act and the Department of Energy’s Part 810 regulations, grew out of a much different time and competitive and strategic context. The rise of other technology suppliers with a strong stake in the global market, such as Korea, China, and Russia, requires that the United States consider other levers to influence norms and practices for assuring nonproliferation and safety. These might include financial, trade, and multilateral partnership mechanisms not related to nuclear trade. Nonproliferation-focused regulations need to be placed in a current-day context that balances economic and strategic benefit with economic and strategic cost. Export control policies may no longer be the most effective mechanisms for pursuing nonproliferation objectives.

National nuclear infrastructures have been and will continue to be critical national assets for the United States. Past investments in nuclear research and operations infrastructures, tied in part to commercial reactor and fuel cycle research and development and including talent, academic programs, materials, and facilities, are also important strategic assets for the future. They provide national capabilities to respond to nuclear accidents, monitor and assess proliferation threats, support domestic defense industry supply chains, and provide training and education that is applied across a range of energy, health, manufacturing, and other industries. A stable industry, coupled with a steady investment in advanced nuclear research, development, and demonstration (in physical science, engineering, and manufacturing) tied to commercial nuclear deployment, would help exercise and enhance these capabilities and therefore provide some national security and strategic benefit, including workforce resilience and capacity that benefits a range of industries.

Simply stated, nuclear technology is very different from other energy technologies and carries substantial strategic implications. The potential costs of continued erosion of national technical and human resource infrastructure related to nuclear technologies should be closely evaluated, and if possible quantified, and the implications considered as national policy is developed.

Geopolitical advantage is as much a part of the nuclear technology equation as ever. Sixty years ago, the geopolitical factor driving the United States to engage internationally in nuclear energy and technology was principally tied to nonproliferation concerns. Today, geopolitical considerations also should factor into the benefits of prolonged presence and regional influence that come from supplying nuclear technology to other nations. Nuclear systems are generally designed to operate in excess of 60 years, during which time fuel cycle services, professional and operational services, upgrades, and other needs are generally met by the supplier nation. The Russian enterprise Rosatom, for example, is successfully marketing its build-own-operate business model to Turkey. This approach includes technical, service, and financing packages. The result of such arrangements is substantial, long-term influence of the supplier country on the buyer. These bilateral relationships tend to deepen over time through expanded applications in academic, industrial, and service sectors, such as medicine. The strategic benefits of such prolonged engagements for the United States, and the costs of ceding them to competitors, ought to be a major consideration in US nuclear energy activity and policy. Such influence can be gained through engagement across the broad nuclear supply chain, not simply through new reactor sales.

The strategic implications for the United States of loss of leadership in international nuclear trade are troubling. The potential benefits to reasserting leadership may be compelling, and the global market appears to offer real opportunities for doing so.

Other observers have proposed a range of actions to reestablish US leadership in global commercial nuclear energy and address the nation’s strategic interests, including modernization of export policy, investment in advanced nuclear research, creation of financial mechanisms to better compete with state-owned foreign competitors, and expansion of workforce training programs. But any serious effort to identify and assess the necessary policy actions must build first on an in-depth assessment of potential US competitiveness in global nuclear energy markets.

To date, no such assessment exists, but it could be conducted by an appropriate interagency working group inside the government, or by an independent external organization such as the National Academies. The assessment must consider the entire global supply chain (not only reactor development and sales), including US competitiveness and potential prospects in its various links. It should then define the changes to national programs and investment strategies required to enable emerging US commercial innovators to compete in international markets and provide a strategic national return, in the context of the principles we have outlined here. Development of such a national supply chain strategy would be the first logical step in implementing a clear, outcome-focused investment and policy approach that can guide both public and private-sector strategies to take US leadership in commercial nuclear energy into the next 60 years.

Knee-Capping Excellence

This past spring, the Trump administration’s fiscal year 2018 budget had little good news for the nation’s biomedical research enterprise. Prominent among areas targeted for deep cuts, the National Institutes of Health (NIH) faced a threatened 22% reduction in its funding—$7.7 billion less in appropriations than the previous year.

Such a draconian action was dismissed upon arrival on Capitol Hill, and steps were taken to shore up overall NIH funding. But buried within the administration’s accompanying budget documents, and receiving far less attention, was a seemingly arcane change in the way NIH supports extramural research, by capping grant funding for indirect costs—also known as facilities and administrative (F&A) costs—at 10% of total research costs.

Longtime supporters of biomedical research in the House and Senate understood the significance of the proposed cap and acted quickly to try to block it. A temporary measure to block the cap was enacted in September and may be extended for the balance of this fiscal year. But the potential remains for unilateral action by the administration to cap or cut indirect cost recovery over the longer term, and there are discussions in Washington about the parameters of federal funding for university-based biomedical research generally and indirect cost recovery in particular.

The sheer magnitude of the research funds at stake under the proposed 10% cap underscores the consequential nature of the issue and signals a misunderstanding of the ways in which indirect costs are essential to the conduct of biomedical research. A 10% cap would reduce by almost two-thirds the amount of funding that research university sponsors would receive to offset indirect costs on federal research grants. In the absence of ready sources of alternative revenue to make up this shortfall, such a cut could not help but force universities to contract dramatically the overall level of research activity conducted on our campuses.
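The arithmetic behind "almost two-thirds" can be sketched as follows, assuming indirect cost payments currently average about 28% of a grant's total costs, a value within the 25%-33% range discussed later in this article:

```python
# Sketch of the "almost two-thirds" arithmetic. The 28% current share is
# an assumption within the 25%-33% average range discussed later in the
# article, not an official figure.
current_share = 0.28   # assumed current indirect share of total grant costs
capped_share = 0.10    # proposed cap: 10% of total research costs

reduction = (current_share - capped_share) / current_share
print(f"Reduction in indirect cost recovery: {reduction:.0%}")  # ~64%
```

At a 28% current share, the cap removes roughly 64% of indirect cost recovery; at the upper end of the range (33%), the loss approaches 70%.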

Here I explore the history, rationale, and criticism of the recovery of indirect costs. I focus on NIH funding because the Trump administration targeted that agency in its proposal. Yet, I recognize that similar effects could be felt across multiple federal granting agencies, such as the National Science Foundation (NSF) and Department of Energy, that rely on NIH-approved rates in funding comparable research. I find that the indirect cost reimbursement formula is an essential component of the biomedical research partnership between the federal government and universities. And I show that the proposed cut by the administration would have a severe impact on the financial capacity of universities to continue to support federally funded research activity.

A valuable partnership

The compact between the federal government and the nation’s research universities in biomedical research is rooted in Vannevar Bush’s seminal report on US scientific research—Science, The Endless Frontier. To support the nation’s burgeoning post-World War II scientific needs, Bush urged the federal government to provide funds to research universities for basic research. Bush argued that these universities could best provide for the “free, untrammeled study” and risk-taking critical to discovery, that it would be impractical for government to re-create the laboratories that already existed on campuses, and that the “traditional sources of support” for academic research would be insufficient without federal support.

Bush’s recommendations led to the creation of a deep partnership between the federal government on one hand and universities (and other institutional sponsors) on the other in which the government would co-invest with these institutions in scientific research. (The federal government also distributes research funds to nonacademic research institutes, private industry, and other stakeholders, but the vast majority of funding goes to research universities, and so that is my focus here.) Universities would build and nurture the research ecosystem and contribute a portion of their own funds to support federally sponsored research, and the federal government would allocate research funds on a competitive and meritocratic basis to their scientists.

From the start, this partnership has been financed through the reimbursement of direct and indirect costs, both allocated from the same pool of appropriated research funding.

Direct costs refer to those research costs that are incurred by the principal investigator specifically for the research proposed in the grant application, such as the salaries and stipends of scientists, the cost of lab supplies and equipment, and travel for conducting the research or sharing the results.

Indirect or F&A costs refer to those research costs that are incurred by the university across multiple research projects, such as the construction and maintenance of laboratories; secure data storage and high-speed data processing; utilities such as ventilation, heat, and lighting; libraries and other research facilities; radiation and chemical safety and hazardous waste disposal; the administrative personnel to support the research and ensure compliance with safety and other rules; and advanced technology and lab equipment that can be optimized for repeated use across many grants. Because indirect costs compensate universities for facilities and services that support a number of different investigators, indirect costs are allocated on a proportionate basis across shared infrastructure and personnel.

For more than six decades, the federal government has been reimbursing universities for indirect costs alongside direct costs. In 1966, the federal government adopted the modern system of compensating for indirect costs, in which colleges and universities are reimbursed through negotiated rates tailored to each institution, its particular expenses, and the type of research it conducts. From the outset, the government has underscored that it would calculate rates using a principle of “cost-sharing” rather than full cost reimbursement.

In practice, F&A rates are calculated as a percentage of the amount awarded for direct research costs (not as a percentage of the overall grant), and universities are expected to deploy their own funds to close the substantial remaining balance. Currently, the average amount paid to universities for indirect research costs on a federal grant is approximately 25%-33% of the total amount of the grant, and the underlying F&A rates vary by university (up to about 65% of direct costs). Research universities that conduct a great deal of biomedical research are generally at the upper end due to higher costs involved in providing and overseeing biomedical research facilities. To further complicate matters, these rates include within them a cap of 26% on allowable administrative (but not facilities) costs, which the Office of Management and Budget (OMB) imposed on universities in 1991 in response to incidents involving misuse of federal funds at several institutions.
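Because rates are quoted against direct costs while average payments are described as a share of the total grant, the conversion is easy to get wrong. Here is a minimal sketch of the relationship; the two rates used are illustrative endpoints drawn from the figures above:

```python
# Converting an F&A rate (quoted as a percentage of direct costs) into the
# indirect share of the total grant: share = r / (1 + r). The two rates
# below are illustrative values drawn from the range cited in the text.
for rate_on_direct in (0.50, 0.65):
    share_of_total = rate_on_direct / (1 + rate_on_direct)
    print(f"{rate_on_direct:.0%} of direct costs "
          f"= {share_of_total:.0%} of the total grant")
# 50% of direct -> 33% of total; 65% of direct -> 39% of total
```

This is why a headline rate of 65% does not mean two-thirds of a grant goes to overhead; it corresponds to roughly 39% of the total, and average payments across institutions land lower still.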

As a result, unlike other funding arrangements between the federal government and private entities—such as contracts or grants with industry partners that typically cover all related costs under a full reimbursement model—here the federal government reimburses universities for indirect costs only up to a prescribed maximum, and the sponsoring university is responsible for all research costs incurred above that level. In 2015 alone, universities contributed $4.9 billion of their own funds to support the indirect costs of federally sponsored research, at the same time that the overall federal portion of research costs declined (from 61% in 2010 to 55% in 2015).

Flawed rationales

Supporters of the Trump administration’s proposed cut to indirect cost recovery have offered a handful of arguments in its defense. Some have suggested that indirect cost reimbursement creates an incentive on the part of universities to overbuild new research facilities. Others have asserted that it leads to the over-hiring of administrators at universities. Still others have claimed that universities receive less in indirect cost recovery from foundations and other nonprofit funding partners than they demand from the federal government. Although the precise character of these arguments varies, their proponents generally see in current NIH grant funding a potential for misuse or overpayment.

In considering these concerns, it is necessary to acknowledge the elaborate oversight systems that have been developed over the years to temper the incentives for abuse. For instance, the federal government carefully defines what can and cannot be considered as an indirect cost, and how those costs are to be calculated by academic institutions. Every three to four years, each research university faces a comprehensive assessment led by either the Department of Defense’s Office of Naval Research or the Department of Health and Human Services to evaluate and negotiate the indirect cost rate to be allowed on federal grants. The formal process for establishing each university’s rate can include requests for additional data and campus site reviews for interviews and equipment audits.

Two types of accounting audits are also employed, one of a university’s financial statements and the other of a specific grant or project. Should an audit, or other source, expose misuse, federal rules and processes provide for the imposition of sanctions and other punishments under state and federal law.

Moreover, since universities bear the full up-front costs of investment in the labs and research facilities that support scientific activities, and since universities are fully responsible for indirect costs incurred above the prescribed ceiling, they face clear incentives to refrain from wasteful spending and constrain costs. Specifically, if a university expands its infrastructure or organization beyond what is justified in light of its anticipated grant awards, the existing administrative cap and rate-setting process prevent that misplaced bet from being externalized to the federal government. It must be borne by the institution.

As the federal investment in biomedical research has declined in real dollars in recent years, the exposure of universities to unrecovered investments in research infrastructure has only increased. It is noteworthy in this regard that, according to a 1996 Arthur Andersen study, universities at least to that point incurred lower indirect costs in conducting federal research than did industry or federal laboratories.

Recently published studies challenge the arguments made in defense of the Trump administration’s cut to indirect cost recovery. In Issues in 2015, Arthur Bienenstock, Ann Arvin, and David Korn analyzed public data on biomedical research facilities at academic institutions and called into question claims of overbuilding of biomedical research space. A 2015 Demos study by Robert Hiltonsmith found that the number of executives and administrative personnel per student at public research universities has actually declined since 1990. And although the American Institutes for Research found that the number of “professional staff” (including research personnel) has increased slightly at public and private universities, so too has the amount of actual research occurring at universities; hence it is unclear how this increase in and of itself is evidence of bloat.

Finally, although the Trump administration’s budget asserts that NIH is paying more to universities for indirect costs than do private foundations, and specifically references the Gates Foundation, this misses the fact that the federal government and NIH use different rules than private foundations to delineate direct from indirect costs. A 2017 apples-to-apples comparison published by the Association of American Universities demonstrated that the federal government and philanthropic foundations compensate “a similar percentage of the total funding” for the expenses that constitute federal indirect costs. In any event, if philanthropy in fact does not pay adequate indirect cost rates for the research it supports, that only enhances the importance of the federal government’s role in supporting the infrastructure of research.

This discussion does not mean that there is no scope for increased efficiencies in the way the federally funded biomedical research system operates; to claim otherwise would be both naïve and disingenuous, and in fact a number of academic medical institutions have taken steps to reduce costs in recent years, in areas ranging from procurement to energy management to organizational efficiency. Rather, my argument is that the various critiques of indirect cost reimbursements have failed to provide any systematic evidence of abuse and waste on a scale that would justify the two-thirds reduction in cost reimbursements entailed by the adoption of the cap.

Devastating effect

According to the administration’s own supporting budget materials, the proposed cap would have led to more than a $4.6 billion reduction in funding in fiscal year 2018 alone. There is no credible evidence that universities would be able to cover a funding reduction of this magnitude through hitherto unrealized cost savings. (Recall that universities are already incented to adopt efficiencies in order to reduce their institutional support for funded research.) Further, given the remarkably tight financial margins under which most universities operate, it is fanciful to imagine that the large pools of undesignated funds required to make up for lost federal funding actually exist. Even if institutional leaders were willing to consider reallocating existing funds to cover the contraction in federal research funding, they would still confront substantial legal, moral, and political constraints on using endowment or tuition revenues for this purpose.

Unavoidably then, universities would be forced to reduce the amount of federally funded research activity that is conducted on their campuses. As noted, institutions tend to have different indirect cost recovery rates to start, and so the precise level of the contraction in research would vary across institutions. But no matter the university, the scale of the reductions needed would not be trivial. A study recently conducted by the economic consulting firm Charles River Associates for Johns Hopkins University found that if the university were unable to reduce the portion of research expense attributed to indirect costs—and if it were not able to find alternate funding sources to compensate for the loss in federal funds—then the imposition of the cap would force a nearly three-fourths reduction in its federally funded research portfolio.

As a result, it is conceivable that if the proposed cap on reimbursement of indirect costs went into effect, pressure would be placed on OMB to allow expenses that are now considered administrative costs to be allocated to the direct cost line. Unless the overall appropriated funding were left entirely intact—which seems unlikely given the administration’s proposed cuts to the NIH budget more broadly—a world in which the government cuts the recovery of indirect costs and then allows those cuts to come out of direct costs is of course no better than one in which direct costs are cut outright. If all of the $4.6 billion in question came out of direct costs, the reduction in NIH direct research funding to scientists at universities and other institutions would be about 28%.
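That 28% figure can be checked by backing out the direct-cost base it implies; this rough sketch assumes, as the text posits, that the full $4.6 billion falls on direct costs:

```python
# Backing out the direct-cost base implied by the article's figures: if a
# $4.6 billion cut taken entirely from direct costs amounts to a 28%
# reduction, the implied NIH direct research funding base is ~$16.4B.
cut = 4.6e9          # proposed reduction from the budget materials
fraction = 0.28      # stated reduction in direct research funding

implied_direct_base = cut / fraction
print(f"Implied direct funding base: ${implied_direct_base / 1e9:.1f}B")
```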

In all, the proposed cap would deal a staggering blow to the nation’s vital interests. Universities would be forced to retrench by downscaling a research enterprise that has been a vital force in advancing discovery and human health. The impact might fall most heavily on early-career investigators, who find themselves on fewer and smaller grants and are among the most vulnerable to funding contractions. The economic consequences of these cuts would also reverberate across the United States, not confined to the biomedical and pharmaceutical sectors but affecting the many upstream and downstream industries connected to them, and the jobs and communities they support. Simply put, the proposal would amount to a deep and lasting cut to the private-public partnership at the foundation of the nation’s biomedical research enterprise.

Opening the Books

Academic health centers are complex ecosystems. They typically have a medical school at the core and one or more major teaching hospitals, often complemented by a Veterans Administration medical center and community hospitals and clinics. Other health professions schools are also often affiliated or included. The centers’ missions of education, research, technology innovation, and clinical care are often viewed as trade-offs, but should be viewed, I believe, as synergies.

The University of California, San Francisco (UCSF), the subject of Follow the Money: Funding Research in a Large Academic Health Center, has become one of the nation’s leading academic health centers. Its medical, nursing, dental, and pharmacy schools are among the most highly ranked and highest-funded research-oriented institutions in each category. UCSF is unique among campuses of the University of California system in being limited to these schools; all of the other components of a comprehensive research university (including the School of Public Health) are across San Francisco Bay at the venerable University of California, Berkeley, campus. Berkeley has none of the UCSF schools, although the university system does have additional medical schools at the Los Angeles, San Diego, Irvine, Davis, and Riverside campuses.

Two UCSF veterans, Henry Bourne, a professor emeritus of cell and molecular pharmacology, and Eric Vermillion, a retired vice chancellor of finance, collaborated on this book. Bourne previously authored Paths to Innovation: Discovering Recombinant DNA, Oncogenes, and Prions, In One Medical School, Over One Decade, published in 2011, which celebrates UCSF’s faculty and its academic environment. The current book explores the university’s financial structures, investigating the sources of revenue, the nature of expenditures, and especially funds transfers between units. This analysis is a tried-and-true approach to understanding the priorities, decision-making processes, resources, and financial risks of an institution. Hence Follow the Money is an appropriate title for this book.

Longitudinal analyses may span a few years or, as in this case, decades; the choice of the base year and the recognition of major external stressors are always critical. This book focuses primarily on funding for biomedical research, including research training for PhDs and MDs, with minimal attention to the many reimbursement issues affecting the now-vast clinical enterprise. The metrics of success for an academic health center such as UCSF involve long-term vision, investments, and scientific and health impacts.

The authors clearly state their concerns in the introduction, as they refer to “the extreme fragility of academic biomedical research,” “sharp cuts in state government support,” and stagnant federal research funding. The book’s overall tone is quite pessimistic, despite the status, strengths, and resilience of UCSF. The authors long for the time several decades ago when the state paid the faculty salaries and the construction costs of a much, much smaller institution. They also claim that medical schools at private universities and other state universities may have more secure funding from endowments or appropriations, a claim I find dubious.

The authors have three aims. First, they present a primer for people, especially those working within UCSF or sister institutions, who want to learn how and where research dollars flow in such complex enterprises and may be bewildered by the hybrid of academic research and education tied to a huge clinical enterprise. Second, they examine how the internal distribution of resources guides and constrains investigators’ goals and training of young scientists in basic science units and clinical departments. Third, they assess the prospects for UCSF’s research enterprise and recommend financial strategies to enhance those prospects. Their analysis benefitted from the openness of UCSF leaders and the transparency of public financial reports by the University of California. They expect, as do I, that the analysis is relevant to dozens of other large academic biomedical institutions.

The authors present the history of this remarkable institution in broad sketches, punctuated by major events, including the 1991 California recession, the 2000 dot-com bust, the 2008 mortgage-based economic collapse, and periodic stresses in health care funding from the state and from the federal government through the National Institutes of Health (NIH). UCSF’s circumstances as of fiscal year 2014 are the fulcrum for the book: in that year, the university had 23,000 employees, 2,000 faculty members, and 5,600 students and trainees. It brought in revenues of $4.45 billion, including $992 million in federal, state, and private grants and in contracts for research. (This research funding accounted for 22% of the $4.45 billion total and 48% of the $2.1 billion “campus,” or nonclinical, revenues.) Since 1984, the teaching hospitals have grown in monetary value from $135 million to $2.4 billion (5.9 times, inflation-adjusted), while the campus grew 2.1-fold. State appropriations, while continuing, did not keep pace, of course; they declined sharply as a percentage of the total and when adjusted for inflation.

Bourne and Vermillion address the critical matter of indirect cost recovery, also known as overhead. These costs include expenses associated with operating and maintaining facilities, the salaries of administrative personnel, and other costs not directly attributable to particular research projects. Overhead for federal grants was capped at 26% of direct costs for administration plus 31% (after audit) for facilities and services, including debt service and depreciation, for a total of 57% in 2014. These rates are much less than actual research support expenditures, and nonfederal grants cover even less; overall, indirect cost recovery at UCSF is 41%. At comprehensive research universities, unlike UCSF, there is often a struggle between researchers funded primarily by NIH (which pays indirect cost recovery on top of approved budgets) and those funded by the National Science Foundation, Department of Defense, and Department of Energy (which include indirect costs in their total grant awards, to be shared by faculty and the institution).
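One way to see how a 41% blended recovery rate could coexist with a 57% federal rate is as a weighted average across funding sources. In this sketch the nonfederal rate and the 60/40 funding split are purely hypothetical, chosen only to show the mechanics, and are not figures from the book:

```python
# Hypothetical illustration of how a 41% blended indirect cost recovery
# rate could arise from a 57% federal rate mixed with lower nonfederal
# rates. The 17% nonfederal rate and 60/40 funding split are assumptions,
# not figures from the book.
federal_rate, federal_share = 0.57, 0.60
nonfederal_rate, nonfederal_share = 0.17, 0.40

blended = federal_rate * federal_share + nonfederal_rate * nonfederal_share
print(f"Blended recovery rate: {blended:.0%}")  # ~41%
```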

Following the money into UCSF’s basic science departments, organized research units, and clinical departments, Bourne and Vermillion find that clinical departments have much higher research revenue per square foot (a common metric for efficient use of space across different kinds of research), higher grant income per faculty member, and higher salaries. They imply that the standards for promotion may be lower for faculty with high grant support. Meanwhile, the NIH cap on salary support ($185,000 for 100% effort) means that the institution must cover the shortfall for more highly paid faculty from other, precious sources.

The authors are particularly anxious about the future for the basic science departments. Surprisingly, they exclude the 14 basic science laboratories funded by the Howard Hughes Medical Institute (plus four more in the clinical departments). These well-funded labs do exacerbate the have/have-less dichotomy, but they represent an enormous strength at UCSF. The authors’ description of the recent move from the earthquake-vulnerable, densely populated Mt. Parnassus campus to the relatively new site at Mission Bay sounds smooth. In fact, the move was quite traumatic at the time, with most basic science faculty initially declining to follow the inspirational lead of Vice Chancellor Keith Yamamoto and the Department of Cell and Molecular Pharmacology. In contrast to the authors’ concerns about research funding, the organized research units, starting with the Cardiovascular Research Institute, are powerhouse entities in the translational research space and are magnets for philanthropy.

Bourne and Vermillion note that basic science faculty no longer do much teaching of medical students, as they focus on their research and research trainees. This is a national phenomenon, as medical schools rely on physician-scientists in response to student demand for clinical relevance in teaching the basic science courses, and it reflects the overall decline of lectures in the learning environment.

The many laments about stagnant or declining NIH buying power since 2004 represent a real stress for medical schools. But these complaints ignore the rapid doubling of the NIH budget from 1998 to 2003 and the windfall from stimulus funding in 2009 and 2010. Support for NIH remains strong in the US Congress, especially among champions who currently chair the House and Senate Appropriations subcommittees. There were overwhelming bipartisan votes in favor of the 21st Century Cures Act after the election in 2016, and then Congress rejected the Trump administration’s proposals for deep cuts in the 2017 appropriation for NIH and for cuts in indirect cost recovery. Clearly, NIH grantees are at risk for 2018, as is funding for health care generally; advocates are pressing for Congress to continue robust investments in biomedical research.

The book concludes with Bourne and Vermillion’s recommendations that the UCSF leadership secure substantial new financial resources, especially for the basic sciences, from philanthropy and from further expansion of the clinical enterprise. They specifically recommend a general endowment fund, a basic science division, special funding to provide partial salaries for selected faculty, assistance for young faculty, and silo-bridging research initiatives. UCSF has implemented some of these suggestions already. In fact, a one-page addendum as the book went to press notes that the school has established a Chancellor’s Fund to support faculty research, a Physician-Scientist Scholar Program for young faculty researchers (similar to the Biological Sciences Scholars Program operating at the University of Michigan since 1998), enhancement of the model for basic science funding, and expanded housing in the Mission Bay area for students and junior faculty.

UCSF has done remarkably well since 2014. As public documents show, UCSF revenues grew to $5.45 billion in 2015 for the whole enterprise, including $3.26 billion from clinical revenue (UCSF Health) and $1.19 billion from grants and contracts (an increase of 20%), including substantial amounts from industry. However, clinical margins are slim, even with better insurance coverage under the Affordable Care Act and Medicaid. Indirect cost recovery on grants and contracts remains limited, and the institution must cover major shortfalls on overhead from nonfederal grants. Meanwhile, UCSF has received several very large gifts and announced major building projects.

In the end, the authors have met their aims, showing how revenues, expenditures, operating margins, and especially internal transfers can reveal the cultures, priorities, vulnerabilities, and even jealousies of a splendid and transparent institution. Others in similar circumstances around the country can learn from Bourne and Vermillion’s book. While respecting their concerns, I expect that UCSF and other leading institutions will be more resilient than the authors fear.

Forum – Winter 2018

What drives innovation?

In “What Does Innovation Today Tell Us about the US Economy Tomorrow?” (Issues, Fall 2017), Jeffrey Funk starts with an assertion that puzzles me, but after that he develops and provides evidence for a point of view that is quite consistent with my knowledge. He asserts early on that most scholars of innovation see new scientific knowledge as generally providing the focus and capability for successful efforts to develop new products and processes—what is often called the linear model. However, though the linear model was widely believed in the innovation research community many years ago, it has been almost completely abandoned over the past quarter century (a bit too sweepingly, in my view, since it remains a reasonable first approximation to much of what is going on in a few fields, particularly biomedicine).

But after that assertion, Funk makes it clear that he does not hold to that theory, and he spends most of his article providing evidence that knocks it down. His report on references to science in patents taken out by a collection of successful start-ups is a useful contribution to the wide range of empirical evidence we now have that, in most industries and technologies, invention generally does not rely on new science. And Funk’s data, like those in other studies, show biotech to be something of an exception.

Funk argues that in most fields of technology what he calls the Silicon Valley model fits what is going on in innovation much better than the linear model, and I think most scholars of innovation would agree. In these fields, most particular innovations are incremental, but the effort is cumulative and progress can be very rapid. Funk makes a point that progress at any stage often is driven by recent advances in component technologies and materials, again an argument that is consistent with a number of other empirical studies. He gives us some very nice examples.

Toward the close of his article, Funk agrees with Robert Gordon and Tyler Cowen that in recent years rapid technological advance has been occurring in only a few sectors, those drawing on biotech and those involved in information processing being the most important. He takes the position that an important route toward broadening innovation is to make research done at universities more explicitly focused on creating the bases for technological advance in areas where the need is great. For reasons that he surely knows but does not go into, this is a very controversial argument. But he will have many people who strongly agree with him.

Richard R. Nelson

Director, Program on Science, Technology, and Global Development

Columbia Earth Institute

Columbia University

Jeffrey Funk’s essay contains informative vignettes about the contributions of science to technological innovation in selected industries and about technological innovations in other industries that are formed instead by what the economist W. Brian Arthur has called “fresh combinations of what already exists” and are essentially independent of scientific advances. Other than perhaps eliding the case of technological advances serving as essential building blocks to scientific advance, these vignettes add to but largely restate well-known propositions in studies of causal linkages between scientific discoveries and technological innovation.

The essay’s efforts to relate its vignettes to national science and technology policy issues are hampered, however, by an overdone reliance on a stylized dichotomy between a linear, science-based model and a technology-built (Silicon Valley) model of innovation. Its presentation and interpretation of data on the percentage of patents citing scientific and engineering publications to “identify the parts of the US innovation system that are working well and those that are not” are also unconvincing.

The crux of the argument is that the frequency of citations of publications in patents is a measure of the importance of the science-based model. Thus, the low percentages that Funk reports in Table 1 for the Billion Dollar Startup Club are presented as indicating little contribution of science to economic growth. Note needs to be made here of the difference between this finding and that of Francis Narin and colleagues, who in 1997 used patent citations to publications in highly regarded journals to demonstrate that “public science plays an essential role in supporting U.S. industry, across all the science-linked areas of industry, amongst companies large and small, and is a fundamental pillar of the advance of U.S. technology.” Relationships clearly may have changed over the past 20 years. But the more likely explanation for the difference is Funk’s limited and uncritical use of the data. The data in Table 1 are largely consistent with a priori expectations about which industries would rely on patents to protect intellectual property and which would not (biotechnology and e-commerce, respectively), and, relatedly, on knowledge embedded in scientific papers. In Funk’s interpretation, no allowance is made for differences in an industry’s reliance on patents relative to, say, trade secrets to establish intellectual property rights, or for the relative importance of different patents within a firm’s patent portfolio.

The issue of the relative mix of mission-oriented and science-oriented investments addressed by the essay is of longstanding importance. In 1987, likely the nadir of US international competitiveness in traditional manufacturing sectors and the peak of concern about declining leadership in scientific and technological endeavors, the economist Henry Ergas introduced the distinction between mission-oriented and diffusion-oriented national research strategies. He described the former as “big science deployed to meet big problems” and the latter as “policies that seek to provide a broadly based capacity for adjusting to technological change throughout the industrial structure.”

Much of US science and technology policy from the early 1980s on, encompassing the Bayh-Dole Act, the Stevenson-Wydler Act, the National Cooperative Research and Production Act, and the Small Business Innovation Development Act, as well as agency-specific initiatives such as the National Science Foundation’s funding of Engineering Research Centers, may be seen as experiments designed to foster a more mission-oriented cast to federal investments in research and development (R&D). Some of these policy initiatives have worked well, others less so.

Against the backdrop of projected near-term austerity in real levels of federal R&D funding, it is not clear what Funk has in mind when he contends that “US policy makers should be moving more of the nation’s R&D investment toward a mission-based approach, and they should be experimenting with different approaches to implementation.” Which combination of investment type, performer, and societal objective (other than economic growth) will bear the opportunity costs of reduced funding? Without specific answers to such questions, depending on one’s choice of metaphors, moving in the direction of his recommendation is either opening a black box or a Pandora’s box.

Irwin Feller

Professor Emeritus of Economics

Penn State University

Jeffrey Funk draws attention to two perspectives on innovation and offers a potential remedy for improving the current innovation approach. The first perspective distinguishes between the optimists’ and pessimists’ views of the economic impact of the current state of innovation in the United States. The second distinguishes between science-based (science-focused) and Silicon Valley-based (mission-focused) innovation. Funk postulates that Silicon Valley innovation is not as dependent on basic or applied science, and that it commercializes faster but penetrates smaller sections of the economy. He proposes speeding science-based innovation and focusing on critical areas for its application—merging academic research with mission-based goals for society. Before addressing this solution, I want to consider the issues with Funk’s assessment of the situation.

Funk’s observations and examples reveal the difficulty of defining, measuring, and tracking innovation, major problems in evaluating innovation’s effectiveness, economic penetration, and upstream and downstream effects. Researchers such as Funk lack time series data (data points indexed in time order) or other relevant data, relying instead on a series of individual surveys, personal anecdotes, and inconsistent methodologies. The assumptions made about data quality and compatibility result in measurements and forecasts that fail to provide policy-makers with the appropriate information to make critical decisions.

Better insights can be achieved with indicators that more comprehensively measure the multiple facets of innovation. Changing the measure of the nation’s gross domestic product to include R&D, computer software and databases, entertainment, and literary and artistic originals as investments rather than expenses, as proposed by the economist Marissa J. Crawford in 2014, would be a step in the right direction. Since companies expect their current spending in these investments to generate future returns and investors consider them in assessing the firm’s market value (versus book value), the investments are indicative of the value of the economy. Broadening the definition of innovation to include investments in design, branding, new financial products, organizational capital, and firm-provided training and other intangibles would provide a more challenging but improved measurement.

Broadening the definition of innovation beyond commercialization would include many additional activities and outputs. For example, currently unmeasured is what the economist Eric von Hippel has called “free innovation,” where innovators have a specific and often personal need to create new products or processes and to make them available to all. Free innovation takes many forms, from medical devices to sports equipment to open source software. A comprehensive measure of innovation would include these free innovations.

These new measures can take advantage of new ways to collect data, such as opportunity data from crowdsourcing and the internet. These new sources of data would supplement the current survey and national accounts measures and provide new insights into current measures.

Funk’s solution to improving the economic impact of innovation is also problematic. If as he proposes innovation must be made more mission-focused, who will decide the missions, the critical areas for supporting research, and the mission-based goals? How will the decisions be made? And what happens to funding for pure basic research? Although tools can be created without basic science research—even cave dwellers and crows have done it—basic science is the feedstock for improving those tools. How does society prevent this mission approach from removing the feedstock for future improvement?

As Funk suggests, there is a need for an improved link between innovation and its economic impact. Better measurement tools and approaches are needed to assess the total economic impact of innovation before we can judge whether a more mission-focused strategy would improve that impact.

Sallie Keller

Professor of Statistics

Stephanie Shipp

Research Professor

Virginia Tech

I largely agree with Jeffrey Funk’s analysis and his prescriptions for improving the yield of academic research projects. He makes a strong separation between science-based innovation and the Silicon Valley process of technology change. And he is correctly critical of the linear model. But I think he would do well to celebrate the positive role that technology-driven innovation plays in providing new challenges that advance science. For example, the invention of the transistor came from technology-driven needs to replace vacuum tubes, later leading to the discovery of the transistor effect, which earned the inventors a Nobel Prize in Physics. There are many such examples, but linear model advocates often rewrite history to favor their misguided model.

I like Funk’s analyses of the MIT Technology Review predictions and his claims that large economic impacts are more likely to come from technology-based innovations. I agree with his recommendations that tying scientific research more closely to national priorities and mission-driven projects would be helpful and that a slightly more centralized approach would be beneficial, as is emerging with the Engineering Research Centers sponsored by the National Science Foundation and the Manufacturing.gov partnerships. Of course, there should always be room for blue-sky explorations and theory-driven science.

One concern is that Funk appears to believe that change can come only through top-down government policy shifts, but bottom-up changes can happen from individual researchers, laboratory directors, and campus leaders who recognize the paths to high-impact research by working more closely with business, government, and nongovernmental organization partners. There is evidence that both paths to change are happening, so more articles such as this are helpful in accelerating changes that will lead to better research with higher societal impact.

Ben Shneiderman

Professor of Computer Science

University of Maryland

Funding highways

As John Paul Helveston says in “Navigating an Uncertain Future for US Roads” (Issues, Fall 2017), highway finance in the United States is “broken and broke.” It is ill suited to dealing with the three emerging revolutions in passenger transportation—electrification, automation, and shared mobility. Although the current method of taxing gasoline (and diesel) is becoming increasingly anachronistic, it remains a very simple and highly efficient method for collecting revenue: less than 1% of the money taken in is spent on collection and administration.

Helveston’s preferred alternative, a tax on vehicle miles traveled (VMT), has many attractions. It relies on in-vehicle transponders and GPS to monitor congestion and location, and it can be fine-tuned to address equity, congestion management, and environmental goals. However, the adoption of VMT taxes will likely face political headwinds at least as strong as those facing increased gas taxes. In addition, it is much more expensive to administer, consuming over 6% of the revenue (equivalent to about $300 million per year at today’s tax rates).

As academics, we endorse Helveston’s enthusiasm for a VMT tax. But if all of the equity, congestion, and environmental benefits of VMT taxes are to be realized, their adoption would have to be done largely by local and state governments. The national government will not and should not determine how to tax single-occupant versus pooled vehicles (such as Lyft Line and Uberpool), roadway tolls, and the use of automated cars, to name just a few road taxation options. VMT taxes will and should be largely a local prerogative and action. Moreover, there are various other approaches that could face weaker political headwinds and be well suited to the task of financing roads and steering the three transportation revolutions to the public interest.

Perhaps the simplest option is to modify the gas tax to be an energy tax. The tax would be administered not just at the gas pump but, in the case of electricity, by the electric utility. This “energy equivalent” tax would impose a fee based on energy content and can take into account vehicle efficiency. The tax could be easily adjusted to meet local infrastructure needs. And it would be cheap to administer. It solves the challenges posed by the electric vehicle revolution.
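
To make the mechanics concrete, here is a minimal sketch, in Python, of how an energy-equivalent tax might be computed at the pump and on a utility bill. The 33.7 kWh-per-gallon figure is the EPA’s gasoline-gallon-equivalent convention; the tax rate and efficiency adjustment are hypothetical placeholders, not anything proposed in the letter.

```python
# Minimal sketch of an "energy equivalent" road tax.
# Assumption: the tax is levied per kilowatt-hour of energy delivered,
# whether as gasoline at the pump or as electricity from the utility.

KWH_PER_GALLON = 33.7   # EPA gasoline-gallon-equivalent convention
TAX_PER_KWH = 0.009     # hypothetical rate, about 30 cents per gallon equivalent

def tax_on_gasoline(gallons: float) -> float:
    """Tax collected at the pump, based on the fuel's energy content."""
    return gallons * KWH_PER_GALLON * TAX_PER_KWH

def tax_on_charging(kwh: float, efficiency_factor: float = 1.0) -> float:
    """Tax collected by the electric utility on vehicle charging.

    The efficiency_factor is a placeholder for the letter's suggestion
    that the tax "can take into account vehicle efficiency."
    """
    return kwh * TAX_PER_KWH * efficiency_factor

print(f"15-gallon fill-up: ${tax_on_gasoline(15):.2f}")   # $4.55
print(f"60 kWh charge:     ${tax_on_charging(60):.2f}")   # $0.54
```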

Other less sophisticated approaches can address the challenges of congestion management and incentivize the use of pooled and automated vehicles (and disincentivize individually owned unshared automated cars). These include providing increased access to high-occupancy toll lanes, designating preferential parking for pooled cars (with higher rates for single-occupant vehicles), or even banning low-occupancy vehicles in certain areas. All of these approaches could be adjusted to ease the burden on disadvantaged travelers.

VMT fees are an elegant solution to the broken, anachronistic gas tax of today, and deserve support. But we should not let the perfect get in the way of the good. Let us continue to be creative and sensitive to local priorities. Let us encourage many flowers to bloom.

Daniel Sperling

Professor and Director

Alan Jenn

Research Scientist

Institute of Transportation Studies

University of California, Davis

Before considering the thesis of John Paul Helveston’s article, it’s useful to review some physics. Moving an object requires energy for overcoming inertia and friction. The amount of energy equals the work done, adjusted for the efficiency of energy conversion. Transportation is work, and the energy used is proportional to the work done. Taxing energy means that bigger, heavier vehicles pay more than smaller, lighter vehicles, something a VMT tax doesn’t do until a system to discriminate among vehicles is added. It’s physics that makes fuel taxes, as Helveston notes, “nearly impossible to avoid and easy to collect.” And taxing energy creates an economic incentive to improve energy efficiency.
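
A back-of-the-envelope formulation makes the point explicit. This is standard vehicle physics, not anything taken from Helveston’s article or Greene’s letter: over a distance $d$, the energy drawn from the fuel is roughly

$$E = \frac{W}{\eta}, \qquad W \approx \underbrace{C_{rr}\, m g\, d}_{\text{rolling resistance}} + \underbrace{\tfrac{1}{2}\,\rho\, C_d A\, v^2\, d}_{\text{aerodynamic drag}},$$

where $\eta$ is the efficiency of energy conversion and $m$ is vehicle mass. Because the rolling-resistance term scales directly with mass, an energy tax automatically charges a heavier vehicle more per mile, while a pure VMT tax sees only $d$.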

The problem with taxing vehicle energy use is political, not practical. The motor fuel taxation system is in trouble, but it’s been in the same predicament several times before and been repaired. Historically, the threats to motor fuel taxes have been (in order of importance) inflation, fuel economy, and, a distant third, alternative fuels. Motor fuel taxes are excise taxes, so inflation erodes their value, which would equally be a problem for a VMT tax. The solution is conceptually simple (index to inflation) but politically difficult. To address increasing fuel economy, an energy tax can also be indexed to the average energy efficiency of all vehicles on the road (also simple but politically challenging). And today, every alternative fuel is taxed except electricity.
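
A minimal sketch of the indexing arithmetic Greene describes, with Python standing in for what would really be statutory formulas; the base rate, price index, and fleet-efficiency figures are invented placeholders.

```python
# Sketch: indexing an energy tax so its real value and its revenue per
# mile are preserved as prices rise and the fleet becomes more efficient.

BASE_RATE = 0.009              # hypothetical base-year tax, dollars per kWh
BASE_CPI = 245.1               # price index in the base year (placeholder)
BASE_FLEET_KWH_PER_MILE = 1.0  # base-year average fleet energy use (placeholder)

def indexed_rate(cpi_now: float, fleet_kwh_per_mile_now: float) -> float:
    """Adjust the rate for inflation and for average fleet efficiency.

    Inflation indexing keeps the excise tax's real value constant;
    efficiency indexing keeps revenue per mile constant as vehicles
    use less energy to do the same work.
    """
    inflation_adj = cpi_now / BASE_CPI
    efficiency_adj = BASE_FLEET_KWH_PER_MILE / fleet_kwh_per_mile_now
    return BASE_RATE * inflation_adj * efficiency_adj

# Example: 10% cumulative inflation and a fleet 20% more efficient.
print(f"{indexed_rate(269.6, 0.8):.4f} dollars per kWh")  # 0.0124
```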

To mitigate climate change, we must urgently improve energy efficiency and reduce carbon dioxide emissions, making this the wrong time to abolish a tax on transportation energy use. In 2016, transportation became the largest source of carbon dioxide emissions in the US economy. Today, fossil petroleum still supplies 92% of transportation’s energy, with most of the rest coming from corn-based ethanol, whose greenhouse gas emissions are, arguably, not a lot better. Because of this, vehicle energy taxes are effectively a tax on carbon emissions and thus a meaningful incentive to improve energy efficiency. VMT taxes could be structured to mimic these environmental benefits, but then tax rates would also need to be periodically adjusted to offset increased fuel economy.
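
The implicit carbon price is easy to estimate; the arithmetic below is my own illustration rather than Greene’s. Burning a gallon of gasoline releases roughly 8.9 kg of CO$_2$, so the 18.4-cent federal gasoline tax corresponds to

$$\frac{\$0.184/\text{gallon}}{8.9\ \text{kg CO}_2/\text{gallon}} \approx \$21\ \text{per metric ton of CO}_2.$$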

Taxing VMT is better for addressing congestion and vehicles’ cost responsibility. Congestion pricing must be time- and place-specific, and will almost certainly be regressive. Although energy use increases linearly with mass, damage to pavements and bridges increases far more steeply, roughly with the fourth power of axle weight. Taxing heavy-duty vehicles’ VMT could work much better than the current system of ad hoc taxes. Why not add targeted VMT taxes to a universal vehicle energy use tax?

As Helveston notes, plug-in electric vehicles comprise less than 1% of new vehicle sales and an even smaller fraction of vehicles on the road. By 2025 electricity is likely to comprise no more than a few percent of vehicle energy use. In the meantime, we can tax electric vehicles. In the future, smart grids could tax their energy use. But we shouldn’t be in a hurry to abolish a useful tax on energy.

David L. Greene

Senior Fellow, Howard H. Baker Jr. Center for Public Policy

Research Professor, Civil and Environmental Engineering

University of Tennessee, Knoxville

Geoengineering ethics

In “Character and Religion in Climate Engineering” (Issues, Fall 2017), Forrest Clingerman, Kevin O’Brien, and Thomas Ackerman provide a neglected, interesting, and potentially valuable perspective on how to deal with high-stakes technology and policy choices related to climate engineering. Their approach is particularly bracing as a counterweight to the widespread but often overlooked presumption of interest-based rationalism that underlies many discussions of public policy, climate engineering, and other matters.

Their argument targets one of the central problems of climate engineering. Any decisions about climate engineering interventions (including refusal to authorize them) would have global consequences, but would also involve unavoidable delegation of authority to some kind of international body, whether political or technical or some blend of the two. Delegation requires some degree of trust. But in global decisions, in which the values of participants are widely diverse and mechanisms for democratic accountability are at best imperfect, what could provide the basis for establishing such trust? The authors’ answer is to abstract a few fundamental character traits, or virtues, that are consistently articulated across the world’s major religious traditions—accountability, humility, and justice—and propose that decisions should manifest these virtues.

As a short list of virtues you would want incorporated in policy decisions, this is a good one, although I would suggest one change. Justice is an odd fit with the other two, because it is more a property of collective political outcomes than of individual character, and because many proposed ways to operationalize it would appear to miss the authors’ target. For example, views of justice that stress procedural fairness seem irrelevant to their aim, while views that highlight expansive protection of property rights might act against their aim. I would propose replacing justice with compassion, particularly in its concern for the suffering of the worst off and most vulnerable populations. Like the authors’ proposed guiding virtues, it is near-universal across religious traditions, and it might more squarely target their concern.

But this is a small objection, almost a quibble. I see three more serious challenges to deriving useful guidance from the authors’ proposed virtues.

First, there has not been a close correspondence between individual character and political decisions since the decline of absolute sovereigns. Contemporary policy decisions are made not by individuals, but by complex bureaucratic and political networks. In such systems, the challenges to making the identified virtues operational, or even influential, are substantial. Indeed, it often appears that political institutions are more effective at aggregating and empowering vice (such as greed, lust for power, delusion) than virtue, even when their ostensible guiding principles are virtues. Multiple examples attest that even explicitly religious institutions, principles, and commitments are not exempt from this generalization when they move, or are moved, into the sphere of temporal or political action. Consider religious justifications of extremist violence, the Catholic hierarchy’s decades-long evasions over sexual abuse of children, or the entrainment of much of contemporary American evangelical Protestantism into the political agenda of the Republican Party. My aim here is not to take cheap shots against religions or religious organizations, nor to reject the authors’ aspiration, but merely to note the acute, unresolved challenges to realizing their aims in the context of high-stakes political action.

Second, the implications of the authors’ proposed virtues for action are frustratingly vague. When any action has diffuse, far-reaching potential consequences, exhorting decision-makers to take account of consequences is surely better advice than the contrary. But this exhortation provides no guidance on what consequences to consider, how many steps removed from the initial action, mediated by what processes (including other people’s decisions), or how to think about them. Similarly, exhortations to humility are clearly proper, if the alternative is rigid confidence in a single view of technical capabilities and consequences. But when any action—or for that matter inaction—leads to multiple linked uncertainties, it is unclear what additional guidance humility provides. Perhaps it overlaps with prudence or precaution, but then what additional guidance does it give? And if the additional guidance is for extreme precaution when dealing with novel acts such as climate engineering, then humility, like accountability, risks becoming a comprehensive prescription for inaction, thus further entrenching the status quo and its associated risks—in this case, continued climate change and impacts, limited only by whatever reductions can be achieved by mitigation.

Finally, the authors propose to apply their character-based framework to climate engineering—and in particular to climate engineering research—but do not say why these activities, rather than other technologies, policy decisions, or research areas, should be subject to such heightened scrutiny. For potential future operational decisions about climate engineering, heightened scrutiny clearly makes sense. Given their global impacts and high stakes, we would surely hope these decisions are made with accountability, humility, and consideration for the most vulnerable. But this seems even more evident for other areas of current research and technology development, such as synthetic biology and artificial intelligence, that are racing ahead with little such scrutiny. Saying “What about these other technologies?” does not, of course, rebut the case for subjecting climate engineering to such scrutiny. But it does raise questions about the limited application of such a character-based perspective, and the oddity of advocating such application for a set of potential technologies not yet in development, indeed scarcely researched, yet already subject to exhaustive, hostile scrutiny.

The application of this heightened scrutiny to research is particularly strange. Although concerns have been raised that research on climate engineering will inevitably lead down a slippery slope to thoughtless deployment whether justified or not, the basis for such claims is weak. Yet the authors appear to presume this will happen, by holding research to account for all harms that might follow from deployment. The link from research to deployment cannot be completely dismissed, but there is little basis for judging it a serious risk. History is littered with technologies researched and developed but not deployed. Moreover, strong restrictions on or aggressive scrutiny of research may act against the authors’ aims for accountability and humility. Because expanded research is needed to advance understanding of potential consequences and risks, strong restrictions on research—particularly if proponents of the research are required to surmount a burden of showing no harm can come from it—would hinder the attempt to gain knowledge about consequences and risks that is necessary to support an informed stance of accountability or humility, except insofar as these are construed as implying categorical rejection of climate engineering deployment or research, under all conditions. I suspect this is not what the authors intended.

Edward A. Parson

Dan and Rae Emmett Professor of Environmental Law

UCLA School of Law

Forrest Clingerman, Kevin J. O’Brien, and Thomas P. Ackerman make a cogent case for the inclusion of religious thought—particularly character ethics—in discussions of solar radiation management and carbon capture and storage. They advocate responsibility, humility, and justice as character traits that may be supported by religious thought and applicable to those making decisions about climate engineering. Though this is unobjectionable, it also misses the mark regarding the most dynamic and useful insights that religions bring to the conversation.

Climate engineering is ontologically disorienting because it clearly places human agency into a position of power over a global entity—the climate—that has never before been deliberately manipulated at this scale. Before the policy community or the public at large is ready to discuss whether climate engineers are sufficiently virtuous, we must determine what our new and rather frightening abilities mean about who we are and where we’re going as a species. We need to reckon with the philosophically shallow but emotionally provocative concern that humans may be “playing God.” Until we can truly accept the implications of the fact that humans are responsible for accidental climate change, we will not be ready to ethically evaluate the proposal to deliberately change the climate.

Religious thought has many relevant insights: Shall we reenvision the role of the human in creation? Are we dominators, stewards, caretakers, partners, priests, kin? Ought we repent of complicity in climate change, ask forgiveness, seek reconciliation? Are humans alone in these decisions, or might we seek wisdom from ancient traditions, a transcendent and wise deity, or even (acknowledging that not all humans have equally caused these problems) from those most affected by climate change? These ontological questions precede and underlie questions of ethics and character. (And several contributions to Clingerman and O’Brien’s recent book, Theological and Ethical Perspectives on Climate Engineering, touch on them.)

The authors correctly insist that scientific proposals for climate engineering should also include moral and social considerations. Some scientists may find this requirement cumbersome. But the philosopher Bernard Rollin, in Science and Ethics, recognizes that if scientists wish to maintain professional autonomy, “they must be closely attuned in an anticipatory way to changes and tendencies in social ethics and adjust their behavior to them, else they can be shackled by unnecessarily draconian restriction.”

Rollin astutely observes that “any major new technology will create a lacuna in social and ethical thought in direct proportion to its novelty.” He worries that this lacuna, if not filled well, will lead to “bad ethics.” For example, he notes that within one week of the cloning of Dolly the sheep in 1997, a poll indicated that 75% of the US public believed that cloning the sheep had “violated” God’s will. If would-be climate engineers wish to avoid a backlash comparable to that against genetic modification of organisms, they would do well to take Clingerman, O’Brien, and Ackerman’s advice by proactively engaging ethics and religion sooner rather than later.

Laura M. Hartman

Assistant Professor, Environmental Studies

University of Wisconsin, Oshkosh

Forrest Clingerman, Kevin O’Brien, and Thomas Ackerman discuss three important character attributes that humanity will need to manage the climate of the Earth: responsibility, humility, and justice. I think the first two will be more important than the last. It’s easy to forget that we are not talking about geoengineering to steer away from our current climate. We are talking about geoengineering when things become really bad and most of the world, if not the entire world, is suffering badly. Responsible action will mean making things better for nearly everyone while at the same time never giving in to the hubris that we can have total control. Justice is important, but any responsible action should mean that we are quite certain the intervention will help pretty much everyone.

Being responsible, humble, and just may be its own reward, but motivating publics will likely require more than admonitions. This is a deeply pessimistic time for many who study the climate problem; some of these scientists fear that the awful truth, as they understand it, would cause people simply to give up. We must also think about attracting people to a future world and giving them reason to be, if not optimistic, at least willfully hopeful about the future. Geoengineering holds some promise for developing hope. Geoengineers need hopefulness.

To act with intention as a geoengineer implies designing interventions to achieve specific outcomes. Definition of goals will surely include the notion that everyone should have enough to eat, sufficient water and shelter, clean air to breathe, and so on. Some parts of the Earth should be dedicated to animals or perhaps re-wilded. Hope may grow as people, or at least groups of people, come together over these goals. Exploration of geoengineering presents the opportunity to learn about effective management of the environment of the planet and to take pleasure in doing a good job in that process.

But people will also likely want or need a simpler clarifying way to think about what humanity is trying to achieve on our home planet, and to some extent this may simply involve beauty. Finding beauty in aspects of the world that humans have engineered will likely require much more common knowledge of, and working appreciation for, how the world of the Anthropocene (the time during which human activity has significantly altered the natural environment) works. If you don’t understand how the world works, how can you understand and appreciate what an intervention does? I would guess that most Americans no longer know as much about where their water and food come from as they did in the past. To many, water comes from the tap, food from the store. The ebb and flow of seasons and what to expect as they change don’t really show up on television or smartphones. As part of the challenge, geoengineers will also have to become communicators and explainers of the wonders of this world and the pleasures and aesthetics of stewardship. And they will have to know an awful lot.

Jane C. S. Long

Oakland, California

Forrest Clingerman, Kevin O’Brien, and Thomas Ackerman call for dialogue between geoengineering researchers and religion (or religion scholars). Religions provide numerous resources for ethical deliberation. They offer distinct vocabularies, concepts, and narratives for framing problems and evaluating possible solutions. As Clingerman and coauthors point out, reflection on geoengineering ought to engage us in discussions of the moral character of researchers. Desirable character traits include justice, responsibility, and humility. As a rule of thumb, no one who aspires to assume the role of God by controlling the world’s climate should be trusted with the technologies to do so.

Some researchers, I imagine, might scoff at alarmist warnings about playing God. Do scientists genuinely harbor divine aspirations, or is this merely the stuff of cinematic depictions of mad scientists and sensationalized news headlines?

In my view, the article’s authors are quite right to stress issues of character, particularly in an epoch that, formally or otherwise, we are naming after ourselves. As they argue, everyone stands to benefit from better acquaintance with religious worldviews and their distinctive moral contributions. But we must also understand that religion—broadly construed—has already framed these and other debates about technology. It is not simply a matter of inserting religion into a conversation where it is absent, in other words. Religious myths, motifs, vocabularies, and aspirations have long taken up residence in our discourse about science and technology. Recognizing this, scientists and others might learn to use religion’s resources responsibly, to reflect more deeply on the marriage of religion and technology that already exists.

Geoengineering strategies present a moral hazard: technological adaptation to climate change may perpetuate avoidance of responsibility rather than force us to address underlying causes—character flaws—that created the crisis. If we can remake the world to suit ourselves and our preferred mode of existence, we can persist in the denial of human and natural limits. Depending on how we deploy them, mythic and religious vocabularies can encourage or critique avoidance of responsibility. They can foster aggrandizement of ourselves as creators, or they can serve as humble reminders of our creaturely status.

Appropriation of religious language in a techno-scientific milieu is rampant—particularly when stakes are high or the achievements unprecedented. We see such appropriation among would-be geoengineers insisting that “we are as gods and we might as well get good at it.” We find it among champions of de-extinction who christen their work “The Lazarus Project.” Atomic scientists of the past century who likened themselves to Hindu deities, or spoke of scientists “knowing sin” for the first time, were telling us something. When engineers dream of interstellar travel and attach mythic names to their visions—”Star Ark” or “Icarus”—we should pay attention.

The first astronauts to orbit the moon read aloud from the Genesis story of creation. Were they praising God’s creation? Perhaps, inspired by their God’s-eye view, they were affirming the extension of humans’ God-given dominion well beyond Earth. Who can say?

Religion scholars can say. Or at least, they can provide intelligent analysis. We need more dialogue between the research community and religion. And we also need to understand what is already being communicated and why.

Lisa H. Sideris

Associate Professor of Religious Studies

Indiana University

Religion and science

I read with interest the essays and personal views discussing the various possible relations one can imagine between science and religion. I learned a lot about the personal life of Jamie Zvirzdin and how she was educated among the Mormons while being fascinated by astronomy and the sciences in general. Kristin Johnson’s thoughts on how the personal beliefs of scientists are affected by the death of their sons and daughters are also of interest and confirm what is already well known: that individual scientists can always find ways to make knowledge and their religious beliefs compatible. And Dinty Moore’s conversations with “real” Americans provide enlightenment about how they perceive the “supposed” divide between science and religion.

As someone who tries to elevate the level of discourse on this recurrent debate about the relations between science and religion, I am struck by the fact that the main reason it has been a dialogue of the deaf for the past quarter of a century is that very few of the protagonists take the time to define the terms “science” and “religion.” For before debating whether “science and religion go hand in hand,” as young Isaac Mills assured Moore, or asking “why can’t the two views simply coexist,” it stands to reason that those who partake in the discussion should first make sure they are talking about well-defined categories and that they mean the same things by those names.

It is thus unfortunate to observe again that none of the contributions takes the time to tell us what they mean by science and by religion. In case some readers think this is obvious and requires no such pedantic talk about definitions, I will simply recall that there are certainly differences between religion, faith, and spirituality, for example. Hence, Moore tells us that he explores the idea that “faith and rationality can coexist.” Since faith can obviously be argued rationally on the basis of some postulate, one can only agree with such a statement. But is rationality synonymous with science? Of course not, and the fact that one can find good reasons to believe in some invisible gods—for that could indeed explain bizarre things such as evil or our very existence—does not mean that it has anything to do with science.

In fact, the first thing to do to get rid of the confused language that dominates this ill-defined debate is to clearly distinguish between the individual and the social-institutional levels. Hence a “religion” refers to a social organization that promotes a set of principles, beliefs, and rules of behavior defined either by a sacred book or an oral tradition said to have its origins in a particular god. Beliefs and spirituality do not have to be linked to a formal religion and can be very idiosyncratic. Thus the members of the Mills family presented by Moore are said to be “devout evangelical Christians.” They are thus part of an official religion and, as such, follow the rules it defines in order to remain part of that community. Now, science is also a social institution that constitutes a community on the basis of a collective practice that methodically seeks to explain the world (material, living, social, and so on) in terms of natural causes. Science is thus a sort of game with its own rules based on observation, experimentation, calculation, and rational argumentation. By its very definition it excludes supernatural explanation, since such an explanation is always possible and thus explains nothing.

Once we clearly define religions and science as different social institutions, it becomes clear that particular individuals can believe whatever they want as long as they obey the rules of the scientific game and do not invoke “miracle” or “god’s action” to explain a given phenomenon. Said differently: science is collective and social, whereas religious beliefs are private and personal. Science and beliefs are thus on different planes. Conflict will occur only when a given religion, as a social organization, wants to limit the freedom of scientific research or to object—without using the same scientific method—that this or that scientific fact cannot be so. It is the social force of institutionalized religions that explains the many well-known historical conflicts that have emerged since the seventeenth century and led to various exclusions of scientists from religious communities and to the condemnation of many books by the Catholic Church. Now that such institutions have lost their temporal (as opposed to spiritual) power, conflicts appear at more local levels when social groups want to impose their views on the larger communities, as witnessed in the debates going on in some US schools about the teaching of evolution in science courses.

Since the Templeton Foundation provides the money for the project “Think Write Publish: Science and Religion,” it is to be hoped that the various essays that come out of this enterprise will go beyond the current confusion of language, which serves nobody, unless one really thinks that confusion can serve the interest of religions. Since most religious people hold their beliefs on sincere faith, no religious believer should be afraid of the most robust results established by the scientific community using its sophisticated method of naturalistic explanations. As Brother Marie-Victorin, a member of the Brothers of the Christian Schools and a noted botanist in Quebec, wrote in 1926, “science and religion follow parallel paths, toward their own goals,” a position that also echoes Cardinal John Henry Newman, who said in 1855 that “theology and Science, whether in their respective ideas, or again in their own actual fields, on the whole, are incommunicable, incapable of collision, and needing, at most to be connected, never to be reconciled.” And if one absolutely wants to “harmonize” a given religion with the current state of science, then it is only necessary to adapt the principles and beliefs of the former to make them compatible with the latter, for science as a social institution cannot be constrained in its freedom by any of the many existing religions. Finally, one should at some point ask this important but neglected question: why do some religions want to “dialogue” with science if the former are about the supernatural world, while the latter is about the natural world?

Yves Gingras

Canada Research Chair in History and Sociology of Science

Université du Québec à Montréal

Science police

In a response to my article “The Science Police” (Issues, Summer 2017), Stephan Lewandowsky, James Risbey, and Naomi Oreskes write: “Keith Kloor alleges that self-appointed sheriffs in the scientific community are censoring or preventing research showing that the risks from climate change are low or manageable.”

That is a complete mischaracterization. My article discussed how political considerations have influenced two high-profile disciplines: conservation biology and climate science. I delved into the experiences of well-regarded researchers who have been affected by unusual efforts to “police” their work and of those who pushed back on such efforts. I did not discuss any scientific research that even hinted—nor did I imply—that “the risks from climate change are low or manageable,” as Lewandowsky et al. suggest in their framing of my article.

As I wrote, Lewandowsky and colleagues published several papers that seemed intended “to foreclose certain lines of scientific inquiry,” such as the study of natural variability. The letter writers assert that this is a “fabricated claim” by me. I can rebut this by simply pointing to the reaction of highly respected climate scientists in 2015, after a related paper published by Lewandowsky et al. appeared in the journal Global Environmental Change. I did not “fabricate” how climate scientists reacted to this paper; I reported on it.

For example, coverage of the Lewandowsky paper by the British newspaper the Guardian included one article headlined “Are climate scientists cowed by sceptics?” Peter Thorne, a professor of physical geography (climate science) at Maynooth University in Ireland, left this comment at the Guardian: “To maintain that we as scientists should not investigate the pause/hiatus/slowdown [there, I used the phrase…] is downright disingenuous and dangerous.” And Richard Betts, a climate scientist and head of the climate impacts strategic area at the United Kingdom’s Met Office, left a similar comment and also wrote an extensive rebuttal to Lewandowsky et al., which I referenced in my article.

In short, my article correctly captured how numerous climate scientists felt after some climate communication scholars suggested that recent research on natural variability was prompted by climate contrarians, when in fact it was a continuation of a long line of inquiry.

Keith Kloor

Freelance journalist

Adjunct Professor of Journalism

New York University

What Does Innovation Today Tell Us About the US Economy Tomorrow?

How does the future of technological innovation look for the United States economy? Experts disagree. Techno-optimists such as Erik Brynjolfsson, Andrew McAfee, Martin Ford, and Ray Kurzweil claim that there are endless opportunities arising from continuing advances in computing power, artificial intelligence, and other areas of science, and that the main challenge for policy makers is to prevent mass unemployment in the face of rapid and disruptive future technological change. Pessimists such as Robert Gordon and Tyler Cowen point to slowing productivity growth and rising health and education costs as evidence that the future contributions of innovation to the economy may be weak, and that policies will be needed to promote faster growth. Is there a way to objectively address this disagreement?

Using data on innovations by start-up companies and research activities by universities, I have analyzed the types of innovations that have recently been successfully commercialized in order to identify the parts of the US innovation system that are working well and those that are not. In doing so, I distinguish between two long-term processes of technology change through which new products and services become economically feasible. The predominant viewpoint among innovation analysts is that advances in science (that is, new explanations for physical or artificial phenomena) form the basis of new product concepts and facilitate improvements in the performance and cost of the resulting technologies. Sometimes called the linear model of innovation, here I will call it the science-based process of technology change. In a second process, rapid improvements in existing technologies—such as integrated circuits, displays, smartphones, and Internet speed and cost—enable new forms of higher-level products and services to emerge; I call this the Silicon Valley process of technology change. It is largely ignored by the academic literature on innovation.

In the science-based innovation process, basic research illuminates new explanations and applied research uses these explanations to improve the performance and cost of science-based technologies such as carbon nanotubes, superconductors, quantum dot solar cells, and organic transistors. In the Silicon Valley process, the emergence of e-commerce, social networking, smartphones, tablet computers, and ride sharing did not directly depend on advances in science, nor did improvements in their overall design. Instead, these products and services became economically feasible through continual, generally incremental improvements in the performance and cost of their underlying technologies.

In making the distinction between these two different processes of technological innovation, I acknowledge that advances in science were indirectly necessary for new forms of products and services to emerge from the Silicon Valley process of technology change. These advances enabled rapid improvements in integrated circuits, magnetic storage, fiber optics, lasers, light-emitting diodes (LEDs), and lithium ion batteries. Without these advances, improvements would have been slower and fewer electronic products, computers, and Internet services and content would have emerged. The distinction is useful, however, because it helps to better understand the types of innovations that have recently been successfully commercialized, and thus which parts of the US innovation system are working better than others. Such understanding, as we shall see, has useful implications for rethinking policies that are mostly informed by a science-based understanding of innovation.

The science-based process of technology change

The predominant view of the sources of innovation is that advances in science—new explanations of natural or artificial phenomena—play a key role in economic growth because they facilitate the creation and demonstration of new concepts and inventions. New explanations of physical or artificial phenomena such as PN junctions, optical amplification, electro-luminescence, photovoltaics, and light modulation emerged from basic research and formed the basis for new concepts such as transistors, lasers, LEDs, solar cells, and liquid crystal displays, respectively. Older examples of science-based technologies include vacuum tubes and radio and television, and more recent examples include biotech products and the technologies that I discuss below. Biotechnology depends a great deal on advances in science because a better understanding of both human biology and drug design is needed for drugs to provide value.

Advances in science can also facilitate rapid improvements in the cost and performance of new technologies, including pre-commercialization improvements, because they help engineers and scientists find better product and process designs. The early pre-commercialization improvements are typically classified as applied research and the subsequent ones are typically classified as development, where advances in science play an important role in identifying the new product and process designs in both applied research and development. My study of 13 science-based technologies, published in Research Policy in 2015, found that one key design change was the creation of new materials that better exploit relevant physical phenomena. New materials enabled most of the rapid improvements in the performance and cost of organic transistors, solar cells, and displays; of quantum dot solar cells and displays; and of quantum computers. This is because the new materials better exploited the physical phenomena on which these technologies and their concepts depended. Advances in science helped scientists and engineers search for, identify, and create these new materials because the advances illuminated the relevant physical phenomena.

For new forms of integrated circuits such as superconducting Josephson junctions and resistive RAM (random access memory), reductions in the scale of specific dimensions enabled most of the improvements, and these reductions were facilitated by advances in science. Most people are familiar with the reductions in scale that enable Moore’s Law (the observation that the number of transistors on a microprocessor doubles every 18 to 24 months). Just as conventional integrated circuits such as microprocessors and memory benefit from reductions in the scale of transistors and memory cells, respectively, resistive RAM benefits from smaller memory cells and superconducting Josephson junctions benefit from reductions in the scale of their active elements. And finding new designs that have smaller scale is facilitated by advances in science that help designers understand the various design trade-offs that emerge as dimensions are made smaller.
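
A rough illustration of the compounding this implies, using my own arithmetic rather than figures from the article:

```python
# Moore's Law compounding: transistor count doubling every 18 to 24
# months, projected over a decade from a hypothetical starting count.

def projected_count(initial: int, years: float, doubling_months: float) -> int:
    """Project a transistor count forward under a fixed doubling period."""
    doublings = years * 12 / doubling_months
    return int(initial * 2 ** doublings)

start = 1_000_000  # hypothetical starting count
for months in (18, 24):
    print(f"doubling every {months} months: {projected_count(start, 10, months):,}")
# An 18-month doubling period yields roughly a 100x gain over 10 years;
# a 24-month period yields 32x.
```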

These types of examples demonstrate how advances in science can both facilitate improvements in new technologies and enable new concepts, and they are one reason science receives large support from federal, state, and local governments. Universities are the largest beneficiaries of this support, and they are expected to do the basic and applied research that can be translated into new products and services by the private sector. But how many technologies are emerging from the science-based process of technology change?

Recent science-based innovations

There are many possible ways to measure the number of science-based technologies that have recently been successfully commercialized. Depending on the definition of “recently,” one could list post-World War II examples such as those mentioned above. However, in addition to the relatively old age of these examples, one would also like to have a relatively unbiased database to measure the recent emergence of science-based technologies. One way to produce such a database is to analyze patents obtained by successful start-ups to see how many of these patents cite scientific and engineering papers. Consider companies that are members of the Wall Street Journal‘s “Billion Dollar Startup Club,” most of which were founded between 2002 and 2012. They are global start-ups that have billion-dollar valuations, are still private, have raised money in the past four years, and have at least one venture-capital firm as an investor.

Table 1 shows the percentage of start-ups by the total numbers of science and engineering papers mentioned in their patents. Not only are well-known science-based technologies such as nanotechnology, quantum dots, superconductors, and quantum computers, or new forms of integrated circuits, displays, solar cells, and batteries, not represented in Table 1, but only eight (6%) of the 143 start-ups cited more than 10 different scientific papers in their patents, and six of them (5%) are biotech and bio-electronic start-ups. The importance of advances in science to biotech start-ups is not surprising. Ninety percent of royalty income for the top 10 universities comes from biotechnology, and universities obtain a larger percentage of the patents awarded in biotech (about 9%) than for all other high-tech sectors (about 2%).

[Table 1. Billion-dollar start-ups grouped by the number of science and engineering papers cited in their patents.]

Some observers might argue that members of the billion-dollar start-up club probably licensed patents from other firms and thus are utilizing more ideas from scientific papers than are shown in Table 1. However, even doubling the number of papers cited in the patents would not significantly change the results, and these increases in paper numbers might not even equal the number of papers typically added by patent examiners. Even those papers cited by start-ups were mostly from engineering journals and not pure science journals such as Nature and Science. And when patents did cite previous information, they cited practitioner magazines, books, and blogs more than science and engineering papers, suggesting that most entrepreneurs are looking for information outside of science and engineering journals.

The small number of start-ups in the billion-dollar club that depend on advances in science or mention science and engineering papers in their patents is probably not surprising to most people in the private sector. It is well known that few academic papers are read and that patents are not relevant for most businesses. One study found that only 11% of e-commerce firms had applied for even a single patent as of 2012, as compared with 65% and 62%, respectively, for semiconductor and biotech firms. The percentage applying for patents is of course lower than the percentage receiving patents, and most of the e-commerce patent applications involve business models and few or none involve science-based patents. This provides further evidence that advances in science play a small role in successful start-ups such as the billion-dollar club members.
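
For readers who want to picture the method, here is a hypothetical sketch of the counting that this kind of analysis rests on; the data structure and figures are invented for illustration and are not Funk’s actual dataset.

```python
# Hypothetical sketch of the patent-citation analysis described above:
# for each start-up, count the distinct scientific papers cited across
# its patents, then bucket the start-ups as in Table 1.

from collections import Counter

# start-up -> set of science/engineering papers cited in its patents (invented)
citations = {
    "biotech_startup_a": {f"paper_{i}" for i in range(23)},
    "ecommerce_startup_b": set(),
    "software_startup_c": {"paper_1", "paper_2"},
}

def bucket(n_papers: int) -> str:
    """Bucket a start-up by how many distinct papers its patents cite."""
    if n_papers == 0:
        return "0 papers"
    return "1-10 papers" if n_papers <= 10 else ">10 papers"

distribution = Counter(bucket(len(papers)) for papers in citations.values())
print(distribution)  # one start-up in each bucket for this invented data
```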

A second type of evidence supporting the case that few science-based technologies have recently been successfully commercialized comes from my analysis of predicted breakthrough technologies by MIT Technology Review between 2001 and 2005. Because these predictions reflect research activities, the market sizes provide insights into whether research done at leading universities in the 1990s and 2000s has become the basis for new products and services. As shown in Table 2, one predicted breakthrough (data mining) has greater than $100 billion in sales; three have between $10 billion and $50 billion (power grid control, biometrics, distributed storage); one has sales between $5 billion and $10 billion (micro-photonics); six have sales between $1 billion and $5 billion; eight have sales between $100 million and $1 billion; and 14 have sales of less than $100 million. (Data for seven could not be found.)

[Table 2. 2015 market sizes for the breakthrough technologies predicted by MIT Technology Review between 2001 and 2005.]

In comparison with other recently successful technologies that were not chosen by Technology Review (and for which there were no markets when the magazine made its predictions), the sizes of the markets for most of the predictions are very small. My own nonsystematic survey of a variety of high-tech industries—based on more than 20 years of working with hundreds of students and companies, as well as on reading Science, Nature, the Wall Street Journal, and websites on the year’s best new products, services, and technologies—identified three technologies with markets greater than $100 billion in 2015 (smartphones, cloud computing, Internet of Things), one with between $50 billion and $100 billion (tablet computers), and four (social networking, fintech, eBooks, wearable computing) with between $10 billion and $50 billion (see Table 3). In other words, eight technologies had greater than $10 billion in sales as compared with only four of Technology Review‘s predictions. Thus, the predictions do not capture a lot of what makes it big in the marketplace, and most of the predicted technologies do not end up having much economic importance.

The second and related interesting thing about the magazine’s predictions is that most of them are science-based technologies, a finding that should not be surprising since high-end research universities such as the Massachusetts Institute of Technology (MIT), which publishes Technology Review, build their reputations in part on claims that basic science performed by their faculty makes critical contributions to technological advance. The numbers of papers published in top science and engineering journals and the number of times these papers are cited are standard measures of support for such claims. It is only natural that MIT and other high-end universities would emphasize science-based technologies when they predict breakthrough technologies and when they look for opportunities in general.

[Table 3. 2015 market sizes for successful technologies that Technology Review did not predict.]

Support for this interpretation comes from the names of the breakthrough technologies chosen by Technology Review. Many of them sound more like research disciplines than products or services: metabolomics, T-rays (terahertz rays), RNA interference (RNAi), glycomics, synthetic biology, quantum wires, quantum cryptography, robot design, and universal memory. Contrast the names of these predicted breakthrough technologies with successful ones missed by the magazine (such as smartphones, cloud computing, Internet of Things, tablet computers, social networking, fintech, and eBooks) and the differences between science-based and Silicon Valley innovation become more apparent. The emergence and evolution of research disciplines represent an early step in the science-based process of technology change, one that often occurs in the university setting, but this step cannot easily be extrapolated to marketplace impact, as shown by the modest performance of most of the Technology Review predictions.

Furthermore, a focus on the science-based process of technology change may also be a major reason why market data were not found for seven of the predictions: mechatronics, enviromatics, software assurance, universal translation, Bayesian machine learning, untangling code, and bacterial factories. Most of these terms refer to broad sets of techniques that existed long before the magazine made its predictions, and thus they are more consistent with research disciplines than technologies that might form the basis for new products and services.

The Silicon Valley process of technology change

The Silicon Valley process of technology change represents a different process by which technologies become economically feasible. Rapid improvements in integrated circuits, lasers, photo-sensors, computers, the Internet, and smartphones enabled new forms of products, services, and systems to emerge; Moore’s Law is the best-known contributor to these rapid improvements. Some economists call these technologies “general purpose” technologies because they have a large impact on many economic sectors.

For example, consider the iPhone, itself a general purpose technology, whose design and architecture did not directly depend on advances in basic or applied science or on publications in academic journals. My 2017 paper in Industrial & Corporate Change quantitatively examines how improvements in microprocessors, flash memory, and displays enabled the iPhone, Apple’s app store, and specific types of apps to become economically feasible. Certain performance and price points for flash memory were necessary before iPhones could store enough apps (along with pictures, videos, and music) to make Apple’s app store feasible. Certain levels of microprocessor price and performance were needed before the iPhone could quickly and inexpensively process 3G signals when apps or content were downloaded. Even the iPhone’s touch screen display represents recent improvements (a new touch-screen layer) in an overall trajectory for LCD displays. By enabling the iPhone and app store to become economically feasible, these improvements also enabled a broad range of app services to become feasible, such as ride sharing, room booking (Airbnb), food delivery, mobile messaging (WhatsApp), and music services.
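
To make this threshold logic concrete, here is a minimal sketch of the calculation; the component costs, the required price point, and the rate of decline are invented for illustration and are not figures from the paper described above.

```python
import math

# Minimal sketch: estimate when an exponentially improving component
# crosses the price point a new product needs to become economically
# feasible. All numbers below are hypothetical illustrations.

def years_until_feasible(cost_now, cost_needed, annual_decline):
    """Years until unit cost falls to the required level, assuming a
    constant annual rate of cost decline (0.35 means 35% per year)."""
    if cost_now <= cost_needed:
        return 0.0
    return math.log(cost_now / cost_needed) / -math.log(1.0 - annual_decline)

# Hypothetical example: flash memory at $50 per gigabyte must reach
# $5 per gigabyte before a phone can affordably store apps and media.
print(years_until_feasible(50.0, 5.0, 0.35))  # roughly 5.3 years
```

The same back-of-the-envelope logic applies to microprocessor cost per unit of processing and to display cost per pixel; a product becomes feasible only after all of its critical components cross their thresholds.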

The impact of rapid improvements in Internet speed and cost on the emergence of new products and services has also been large. For example, these improvements changed the economics of placing images, videos, and objects (for example, Flash files) on web pages. In the late 1990s, web pages could not include them because downloading them was too expensive and time-consuming for users. But as improvements in Internet speed and cost occurred, for both wireline and wireless services, most websites added images, video, and objects, thus enabling aesthetically pleasing pages and the sale of items (such as fashion and furniture) that require high-quality graphics. Rapid improvements in Internet speed have also enabled cloud computing, Big Data services, and new forms of software including more complex forms of advertising, pricing, and recommendation techniques.

These types of rapid improvements and their resulting economic impact on new products, services, and systems represent a different process of technology change than does the science-based process. The Silicon Valley process of technology change primarily involves private firms, the resulting new products and services do not directly depend on advances in basic or applied science, and the ideas for them are not published in science and engineering journals.

Recent innovations from Silicon Valley

The Silicon Valley process of technology change has been critically important for the US economy. Improvements in microprocessors and memory chips (which benefited from Moore’s Law) led to technologies that include minicomputers in the 1960s; digital watches, calculators, and PCs in the 1970s; laptops, cellular phones, and game consoles in the 1980s; set-top boxes, web browsers, digital cameras, and PDAs in the 1990s; and MP3 players, e-book readers, digital TVs, smartphones, and tablet computers in the 2000s. Many new types of content and services have also emerged from improvements in Internet speed in the 1990s and 2000s, including music streaming and downloads, video streaming and downloads, cloud computing, software-as-a-service, online games, social networking, and online education.

To analyze more recent examples, consider again the Wall Street Journal’s billion-dollar start-up club. As discussed above, few of the start-ups’ patents cited scientific and engineering papers, and most of the cited papers were in engineering and not pure science journals. Evidence that many of the opportunities emerged from the Silicon Valley process of technology change can be seen in the large number of Internet-related start-ups in Table 1. Of the 143 start-ups, 119 offer Internet-related services; this roster includes 41 software, 26 e-commerce, 37 consumer Internet, and 15 fintech start-ups. Although advances in science are still important for parts of the Internet, such as photonics and lasers, by the 1990s the overall design of the Internet did not depend on these advances, and the emergence of new services depended mostly on improvements in a combination of Internet speed and cost and on new access devices such as smartphones. For example, certain levels of Internet speed and cost were necessary before cloud computing and Big Data services became economically feasible, and these two technologies form the basis for most of the software and fintech start-ups in Table 1. Also, most of the e-commerce start-ups in Table 1 (such as apparel sites) couldn’t successfully sell their products over the Internet until inexpensive and fast Internet services enabled the use of aesthetically pleasing and high-resolution images, videos, and flash content beginning in the mid-2000s. And many of the consumer Internet start-ups are apps that became feasible as inexpensive iPhones and Android phones became widely available.

Of course, the large number of Internet-related start-ups exploited by the billion-dollar club members should not be surprising since the Internet has long been a target for start-ups, peaking in 2001 during the so-called Internet bubble, just when Technology Review began predicting breakthrough technologies. Since 2001, according to various venture capital analyses, such as Dow Jones Venture Source 2016 and WilmerHale’s 2016 Venture Capital Report, the fractions of total start-ups represented by life science (including bio-pharmaceutical and medical devices) and by Internet-related sectors have both increased slightly, in terms of financing announcements and financing dollars alike. Nevertheless, even in 2015, only 22% of start-ups were classified as health care-related financings, while 71% were Internet and electronic hardware financings. The 71% figure is slightly less than the 83% figure (119/143) for the billion-dollar club, suggesting that the chances of success for Internet firms may be slightly higher than for non-Internet firms.

These data suggest that many more products and services, and much more economic activity, have recently emerged from the Silicon Valley process than from the science-based process of technology change. Most of the innovation in the US economy seems to be concentrated in those sectors for which information technology has a large impact, such as computing, communications, entertainment, finance, and logistics, with transportation (ride sharing and driverless vehicles) perhaps joining this group in the near future.

This conclusion is consistent with the pessimistic assessments of the US economy offered by the economists Robert Gordon and Tyler Cowen. Both argue that recent technological change has been concentrated in a few sectors, and both question whether information technology, including Big Data and artificial intelligence, can have a positive impact on a wider group of sectors in the future. Their views reinforce a widely held belief—which I share—that productivity improvements in other sectors require science-based innovations, which in turn means that the United States’ economic future depends on improved processes of science-based technology change. Advances in genetically modified organisms and synthetic food are needed for the food sector; advances in nanomaterials are needed for housing, automobiles, aircraft, electronics, and other sectors; advances in new types of solar cells (such as quantum dots and perovskites), fuel cells, batteries, and superconductors are needed for the energy sector; advances in electronics and computing are needed to revive Moore’s Law; and advances in science’s understanding of human biology are needed to extend healthful longevity.

This means that improving the science-based process of technology change is a critical task for the United States and its allies. Not only have relatively few science-based technologies recently been successfully commercialized, but this performance record stands in contrast with the dramatic increases in recent decades in funding for basic research, particularly at universities, that are often justified in terms of their potential to catalyze technological advance. Expenditures from the US government on “basic research” rose from $265 million in 1953 to $38 billion in 2012, or 20 times higher when adjusted for inflation. For basic research at universities and colleges, the expenditures rose by more than 40 times over this period, from $82 million to $24 billion. Dramatically higher expenditures and lower output suggest that this process is not working as well as it should. This conclusion is consistent with other research showing significant declines in the productivity of biomedical innovation—a phenomenon described by one analyst as Eroom’s Law (that’s Moore spelled backward). Furthermore, the greater success of the Silicon Valley process of technology change suggests that the problems with the science-based process lie more in the upstream (university) than in the downstream (private sector) side.
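
As a rough check on that inflation adjustment, one can divide the nominal growth ratio by the rise in the overall price level; the price-level ratio of roughly 7.5 between 1953 and 2012 used below is my own approximation, not a figure from the expenditure data cited above.

\[
\frac{E_{2012}/E_{1953}}{P_{2012}/P_{1953}} \;\approx\; \frac{\$38\ \text{billion} / \$265\ \text{million}}{7.5} \;\approx\; \frac{143}{7.5} \;\approx\; 19
\]

This is consistent with the roughly twentyfold real increase; applying the same deflator to the university figures ($24 billion against $82 million, a nominal ratio of nearly 300) yields a real increase on the order of 40 times.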

Making science valuable

The previous sections illustrate several problems with the science-based process of technology change. First, most of the output from university research is academic papers, yet few corporate engineers and scientists cite science and engineering papers in patents, or probably even read these papers. The well-known business magnate, engineer, and inventor Elon Musk has publicly said that most academic papers are useless. Others argue that government, environmental, and health care professionals also do not read these papers or use them to make policy. Asit Biswas, a member of the World Commission on Water for the 21st Century—who has also been a senior adviser to 19 governments and six heads of United Nations agencies, and brandishes 8,773 citations, an h-index of 39, and meetings with three popes—observed in a recent op-ed: “We know of no senior policymaker or senior business leader who ever read regularly any peer-reviewed papers in well-recognized journals like Nature, Science or Lancet.”

There are probably many reasons why few academic papers are read and used by policy makers and private-sector managers, engineers, and scientists. The biggest reason is probably that the papers’ value does not outweigh the cost and difficulty of accessing them, which are high for most people. The value is low because the criteria that academic reviewers use to evaluate new technologies differ sharply from those used by private-sector engineers and scientists: academic reviewers emphasize scientific explanations, elegant mathematics, comprehensive statistics, and full literature reviews, while private-sector engineers and scientists emphasize performance and cost.

A second reason is that many pre-commercialization improvements are needed before these technologies can be commercialized, as I showed in my 2015 Research Policy analysis. Although management books prattle on about how much faster new technologies are commercialized now than in the past, the low output of science-based technologies, along with the productivity slowdown documented by Robert Gordon and Tyler Cowen, suggests otherwise. The reality is that these technologies take many decades, if not longer, to be commercialized, during which time many improvements must be implemented. For example, the phenomena of photovoltaics and electro-luminescence were discovered in the 1840s and early 1900s, respectively, yet the diffusion of solar cells and LEDs has been quite recent even as they experienced very rapid improvements over the past 50 years. If the United States is to increase the rate of commercialization for science-based technologies, these time periods must be reduced through new ways of doing government-sponsored research. How will this be achieved?

One way is to create stronger links between advances in science and the achievement of particular technical or social goals. Such a mission-based approach to research and development (R&D) was central to how, during the Cold War, the US Department of Defense successfully developed fighter aircraft, bombers, jet engines, the atomic bomb, transistors, integrated circuits, magnetic disks, computers, GPS, and the Internet. The department emphasized improvements in cost and in specific dimensions of performance, a quest that often required researchers to devise new scientific explanations. The department still uses this approach in its Defense Advanced Research Projects Agency, which can count drones and driverless vehicles as two recent successes.

A mission-oriented approach to scientific research can be applied to a wide variety of technologies with applications in areas such as health and the environment. In a mission-based approach, decision makers fund technology that can enable measurable improvements in various products and services along with health or environmental outcomes. Crucially, such mission-based R&D must fund multiple approaches and multiple recipients for each technology, while emphasizing reproducible improvements more than academic publications, over time scales that are longer than those typically demanded in the marketplace but shorter than those typically experienced for science-based technological change. Recipients that deliver measurable improvements should be rewarded with more funding; those that do not should not be.
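
One minimal version of this reallocation rule is sketched below; the teams, the measured improvements, and the simple proportional formula are hypothetical illustrations of the principle, not a prescribed program design.

```python
# Minimal sketch of performance-based reallocation for mission-based R&D.
# Teams, metrics, and the proportional rule are hypothetical examples.

def reallocate(budget, improvements, floor=0.0):
    """Split the next round's budget in proportion to each recipient's
    measured improvement; recipients showing none receive only the floor."""
    total = sum(improvements.values())
    if total == 0:
        # No one improved: split evenly so the program can continue.
        return {team: budget / len(improvements) for team in improvements}
    return {
        team: floor + (budget - floor * len(improvements)) * gain / total
        for team, gain in improvements.items()
    }

# Hypothetical measured efficiency gains (in percentage points) for
# three competing solar-cell approaches funded in parallel.
next_round = reallocate(
    budget=10_000_000,
    improvements={"quantum_dot": 2.1, "perovskite": 4.3, "tandem": 0.0},
)
print(next_round)  # the perovskite team earns the largest share
```

A real program would need safeguards, such as smoothing across rounds so that one bad year does not eliminate a promising approach, but the principle is the same: funding follows reproducible, measured improvement.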

If the future of slower growth and greater inequity predicted by thinkers such as Gordon and Cowen is to be avoided, US policy makers should be moving more of the nation’s R&D investment toward a mission-based approach, and they should also be experimenting with different approaches to implementation. Three leading economists of innovation, David Mowery, Richard Nelson, and Ben Martin, argued recently in a Research Policy paper that an important issue in an R&D system is the “balance between decentralization and centralization in program structure and governance.” A mission-based approach represents more centralization than does the current decentralized system of academics writing papers, but what level of centralization is most appropriate, and how should it be administered? How can scientific creativity be maintained and promoted while also linking it to problem solving? How should technological choices be made, and how should different institutions in academia, government, and the private sector work together in making such choices? If public R&D funds are not going to increase significantly in the next several years, as seems to be the case, how should the proper balance be struck between a mission-based approach and the existing decentralized network of professors writing papers? Science-based technology’s contributions to economic growth and social problem solving are inadequate. Policy makers need to begin experimenting with new ways to improve its performance.

Editor’s Journal

Science and religion have become opposing pawns in the divisive and ugly political game that mars the United States today. It is only a small oversimplification to suggest that science is increasingly claimed by liberals as their rightful domain, the rational basis for policy making and the foundation of progress, whereas for conservatives, religion provides the moral precepts of a good society and a bulwark against the promiscuous change that can be thrust upon families and communities by scientific and technological advance.

But in a culture—Western culture, today—where science and religion are so often cast as irreconcilable combatants, is it simply too obvious an irony to point out that many of the founders of modern science (including Newton and Kepler) were highly devout men? And although it is certainly the case that a much higher proportion of nonscientists (something over 80%) in the United States believe in God than do scientists (something over 30%), don’t the many thousands of scientists who nonetheless are believers falsify the idea that there is a state of inherent conflict between science and religion?

Several years ago, Lee Gutkind, the editor and founder of Creative Nonfiction magazine and my colleague at Arizona State University, and I decided that a culture often divided by putative fault lines between science and religion might benefit from some new and different stories about their interrelations. People come to know the world in part through stories, and many people know the story of Galileo being tossed into prison by the pope, or John Scopes going on trial in Tennessee for teaching evolution—not to mention Adam and Eve being kicked out of Eden simply for the sin of seeking knowledge. But if stories are especially good at making sense of the ambiguities and contradictions of the human condition, where and what are the stories that can communicate a more complex and even fruitful relationship between science and religion?

Several of them are in this edition of Issues. They are part of a bigger project, Think-Write-Publish: Science and Religion (funded through the generous and unfettered support of the John Templeton Foundation), aimed at building a community of storytellers writing true narratives about the generative and harmonious potential emerging at the intersections of science and religion. The first part of the project was a competitive fellowship through which people with great story ideas based in real-world experience would be trained in the craft of creative nonfiction writing. We selected 14 fellows from a pool of more than 625 applicants. Among them are poets, scientists, priests, a doctor, a philosopher, a nurse, and a procurement manager. After three intensive workshops and nearly a year of writing and revising under the tutelage of experienced narrative-writing mentors, the fellows are now working to get their stories published. You can find out more about them at https://scienceandreligion.thinkwritepublish.org/fellows/.

The second part of the project was a writing competition. We asked for creative nonfiction stories about the ways in which science and religion “productively challenge each other as well as the ways in which they can work together and strengthen one another.” We received more than 200 submissions, and our top two selections, plus one of the two honorable mentions, appear in these pages.

Our first-prize winner is Rachel Wilkinson’s “Search History,” a personal exploration of how Google seems to have become the way many of us seek answers to our deepest questions in today’s world—and what we may therefore have lost in the process. She confesses: “I hate myself for seeking childish things—truth, meaning, the possibility of a loving god. For asking for these things from a mystical series of algorithms. But I want them still.” Wilkinson is a Pittsburgh-based independent writer and editor.

Second prize is for “‘Shuddering Before the Beautiful’: Trains of Thought Across the Mormon Cosmos,” by Jamie Zvirzdin, who teaches in the science writing program at Johns Hopkins University. Her story recounts the dual forces of Mormonism and astronomy in her life, leading up to her decision to leave the church. The story’s final, prayer-like words capture the essence of our project: “In our search for the sublime, may the tracks of science and religion join to seek out the mysteries of the universe and to revolutionize a country in great need of humility and inspiration.”

“The Best Panaceas for Heartaches” is our honorable mention, by Kristin Johnson, a professor in the Science, Technology and Society program at the University of Puget Sound. Johnson explores how eighteenth- and nineteenth-century science and religion reinforced one another to help natural scientists in England cope with what was then the inescapable tragedy of childhood mortality. In this telling, religion is not the source of complacency in the face of suffering, but the motive and the rationale for pursuing the scientific knowledge that could alleviate suffering. Medicines, as Johnson explains, were “God’s gifts, but gifts that would be revealed only through human effort.”

The prize-winning stories are bookended by two wonderful contributions to the project. First, to introduce the science and religion theme, Lee Gutkind and I interviewed the brilliant and humane writer Marilynne Robinson. A bit more introduction to Robinson and the interview can be found at the beginning of the special section. And to wrap things up we have a generous-spirited, funny, and at times surprising piece of narrative reporting from the writer Dinty Moore, who directs the creative writing program at Ohio University and is the editor of Brevity: A Journal of Concise Literary Nonfiction. For his story, “Beyond the Primordial Ooze,” Moore journeyed into the postindustrial heartland of America to discover the many truths of science and religion in the real world. His conclusion? “Science and religion are both fine, as long as we have more bacon.”

And that’s only half of what this edition of Issues has to offer. Two professors of religion, Forrest Clingerman and Kevin O’Brien, working with a professor of atmospheric sciences, Thomas Ackerman, must have been channeling us when they submitted an essay on the topic of religion and geoengineering. Their Perspective provides a perfect proof-of-concept for our science-and-religion project: it makes the case that religious traditions provide a necessary foundation for thinking about the qualities of character—responsibility, humility, and justice—that society should seek in deciding whether to engage in the godlike task of engineering the climate in the face of anthropogenic climate change.

Closer to Issues’ traditional wheelhouse are two important and original contributions to how we should be thinking about science, innovation, and a better future. Jeffrey Funk offers a novel, helpful lens for looking at where most innovations are coming from these days. In his assessment the United States is doing well at Silicon Valley-style innovation that builds on existing technological platforms to generate new ideas, products, and capabilities, but the nation’s ability to link scientific advance to innovation has atrophied and needs a boost—not necessarily in funding, but in seeking a better balance between decentralization of idea generation and strategic, mission-driven research.

Meanwhile, Kenneth Weiss wonders if all that science has learned about genomics over the past couple of decades is raising a question that no one wants to hear: What if the idea of precision medicine, which motivates so much hope and hype around the future of biomedical science, is not possible, even in theory? He worries that the professional, funding, and incentive structure of academic biomedical research makes it all too difficult for science to self-correct in the face of a dominant paradigm on which so many careers and so much political patronage depend.

Finally, if Congress wants to reform the nation’s tax system and rebuild its infrastructure, John Helveston points to the benefits of moving from a fuel tax—which is fast becoming unsustainable in the face of automotive innovations such as electric vehicles—to a “vehicle miles traveled” tax that, unlike the fuel tax, can be tweaked to adjust for technological changes in the vehicle fleet, as well as considerations of equity.

Managing Water Scarcity, or Scarcely Managing?

John Fleck’s Water is for Fighting Over and Other Myths about Water in the West is a valiant effort to make sense of the behavior of water managers and politicians in the western United States. As anyone familiar with the complexities and intrigue of water management will attest, it is hard to imagine how the apparently random set of laws, rules, and policies that govern water in the United States came to be. To the uninitiated the whole system makes no sense. But Fleck gives us the personal stories that bring the regulatory and voluntary components of local and regional water management into focus. He provides a new view of some of the same issues illustrated in Cadillac Desert, Marc Reisner’s classic history of western land development and water policy, without that book’s judgmental overtones.

Water is for Fighting Over is largely about the transition of western rivers from an era of plenty to an era of scarcity. But perhaps its most important contribution is pointing out how much water supply flexibility the nation still has, as evidenced by rising groundwater levels in Albuquerque and Tucson, dramatic reductions in per capita water use rates, massive increases in agricultural efficiency, and small experiments with environmental pulse flows on the Colorado River. That said, Fleck gives considerable play to the point that there is no free lunch: increases in conservation efficiency are often just another form of water reallocation among users, and it is usually either the environment or the downstream users (in this case, Mexico) that suffer the consequences. The fact that there are actually some downsides to water conservation is a point that is frequently missed by people who don’t do their homework the way Fleck did. For example, conserving water by lining the All-American Canal, an aqueduct that brings water from the Colorado River to Southern California’s Imperial Valley and runs parallel to the US border with Mexico, reduced recharge to aquifers and dried up riparian areas and agriculture in Mexico that were previously dependent on seepage from the canal.

Another important component of the book is documenting the contributions of dozens of unsung heroes of water management in the US West. A good example is Terry Fulp, currently the manager of the entire Lower Colorado River for the US Department of the Interior. Though he has always maintained a low profile, his technical expertise, wisdom, and boundless patience with the often-agitated water users of the Lower Basin have been enormous assets in the context of finding solutions to seemingly intractable problems. Another rarely appreciated hero is Jennifer Pitt, who through her work at several nonprofit groups has played a major role in bringing environmental issues to the center of many important conversations regarding the Colorado, despite the relatively limited tools at her disposal.

Although Fleck’s argument is that there are more examples of collaborative solutions in solving the water issues of the US West than there are examples of fighting over water, it is hard to ignore the fact that most of the solutions he describes started as a result of conflict among the basin states, among local and regional water users, or among sectoral interests such as agriculture and cities. What brought people together, in many cases outside of the courts or the halls of government, was almost always a recognition that a better outcome was possible through negotiation than through lawsuits.

In my view, this means that conflict or the anticipation of conflict was a necessary precondition that provided the incentive for compromise or investments in solutions. But other ingredients of success were also necessary: the development of a joint understanding of the facts, leaders with vision and tenacity, and, almost always, people who were willing to innovate instead of following historical precedent. An important point made early in the book, in the context of an excellent story about the political economist Elinor Ostrom’s work on groundwater management in Los Angeles, is that informal networks of players who develop trusted relationships over time are often an ingredient of success.

Fleck does an amazing job of helping us navigate through the arcane intricacies of many of the West’s famous and not-so-famous water stories. It must have been daunting to figure out whose version of each story to believe, to figure out which details to include, and to organize all of the stories, issues, and solutions in a way that makes sense to readers. In fact, the total amount of research involved in writing this book must have been overwhelming, so it is a great tribute to Fleck’s tenacity that it exists at all. But it is excellent: a good read, well organized, and interesting.

Over the course of my own limited engagement with the Colorado River, there has been an unending chorus of people who are convinced that the “Law of the River”—the numerous contracts, laws, court decisions, and regulations that apportion the water and govern the use of the Colorado among the seven basin states and Mexico—is broken. They argue that this governance structure will never survive the realities of drought, climate change, over-allocation, tribal water needs, and the demands of the federal Endangered Species Act, and that it is imperative that a new management regime be developed.

The reality of what has evolved over the past 20 years between the basin states and the federal government is a mutual understanding that negotiated side agreements can relieve some of the pressures on the unwieldy system. Most major players agree that taking apart the existing foundation of the interstate water management system would lead to chaos, which is why so much effort has been put into protecting the existing system, despite its obvious flaws. The desire to manage within and around the existing system has actually led to the most innovative solutions, whether it is shortage sharing to protect Las Vegas and the lowest priority users, or interstate water banking, or agreements with Mexico to store some of its water in US reservoirs. Whether the system survives another 20 years is another question, but it might be a good bet.

I do have one small bone to pick with Fleck, having spent 23 years in the trenches as a water manager in Arizona myself. I was struck by the characterization of Arizona as “its own worst enemy” in conversations about the Colorado River. Although it is true that Arizona has always played the victim card in its conversations with California and that the state arguably has been overly litigious, it’s also true that these tactics have actually worked. Although Arizona’s Central Arizona Project allocation has the lowest priority on the river, leading to lots of angst, Arizona has successfully protected its relatively large allocation.

Arizona’s behavior, if unbecoming at times, has also made substantial contributions to solutions. This is particularly significant when considering the state’s relatively small population and economic capacity. Its disproportionate contribution comes in large part from an array of impressive leaders who, over much of the past century, were engaged in the development of far-sighted water management schemes. Arizona has made significant investments in groundwater management, aquifer recharge, shortage sharing, interstate water banking, artificially created surplus, and international negotiations. The result is that Arizona’s water supply is relatively stable by comparison with other lower basin states, hardly supporting Fleck’s “worst enemy” characterization.

The most sobering thought I had while reading Water is for Fighting Over came from a quote in its conclusion from Brad Udall, a research scientist at the Colorado Water Institute. Udall distinguishes between “the reality of the people” and the “reality of the water community.” His point is that although the public’s perspective on water issues is often limited to “just want[ing] to turn on the tap and have water come out,” the public also has “some basic notions of fairness and good sense, imagining that the policies underlying our attempts to supply that water will consider questions of equity, sound economics, and the environment.”

Udall’s assumption that we can’t “violate the public’s sense of rightness” in our attempts to manage water makes a great deal of sense in a world where facts matter and where there are consequences for those who fail to be good stewards of the nation’s common resources. However, given recent events in US politics, it is hard not to question whether the public that Udall refers to has the same sense of rightness today as the one that elected his late father, Arizona Representative Morris K. Udall, a forceful environmental advocate and hero in Arizona’s water management efforts.

Mixed Messages about Public Trust in Science

For many years, the scientific community has been wondering—and often worrying—about the extent to which the public trusts science. Some observers have warned of a “war on science,” and recently some have expressed concern about the rise of populist antagonism to the influence of experts.

But public confidence in the scientific community appears to be relatively strong, according to a nationally representative survey of adults in the United States by the Pew Research Center in 2016. Furthermore, scientists are the only one of the 13 institutional groups covered in the General Social Survey, conducted by the National Opinion Research Center, for which public confidence has remained stable since the 1970s. However, this favorable attitude is somewhat tepid. Only four in 10 people reported a great deal of confidence in the scientific community.

A series of other Pew Research Center studies, however, have revealed that public trust in scientists in matters connected with childhood vaccines, climate change, and genetically modified (GM) foods is more varied. Overall, many people hold skeptical views of climate scientists and GM food scientists; a larger share express trust in medical scientists, but there, too, many express what survey analysts call a “soft” positive rather than a strongly positive view.

There are, of course, important differences in opinions about scientists in each of these domains. For example, people’s views about climate scientists vary strongly depending on their political orientation, consistent with more than a decade of partisan division over this issue. But public views about GM food scientists and medical scientists are not strongly divided along political lines. Instead, views about GM food issues connect with people’s concerns about the relationship between food and health; most people are skeptical of scientists working on GM food issues and are deeply skeptical of information from food industry leaders on this issue. On the other hand, older adults, people who care more about childhood vaccine issues, and those who know more about science are, generally, more trusting of medical scientists working on childhood vaccine issues than are other people.

It is important to keep in mind that public beliefs about science and scientists aren’t necessarily indicators of trust, per se. One example involves public support for the products of scientific and technological innovation. A Pew Research Center survey found that about two-thirds of people see the effects of science on society as mostly positive, which is consistent with some 35 years of data from the General Social Survey.

Looking at the components of trust in science across these three scientific areas—vaccines, climate change, and GM food—two patterns stand out. First, public trust in scientists is stronger, by comparison, than it is for several other groups in society. For example, many more people report trust in information from medical scientists, climate scientists, and GM food scientists than information from industry leaders, the news media, and elected officials. On the other hand, no more than about half of people hold strongly trusting views of scientists in any of these domains. For example, only 47% of people say that medical scientists understand the health effects of the measles, mumps, and rubella (MMR) vaccine “very well.” Some 43% hold soft positive views of medical understanding about the MMR vaccine, saying medical scientists understand this issue “fairly well.”

The scientific enterprise is complex and so, too, is public opinion about science. The notion of trust itself has multiple dimensions. Public trust in scientists encompasses expectations about scientists’ actions, trust in scientists to be honest brokers of information, trust in scientific expertise and understanding, and trust in the motivations and influences operating on science research. Viewed through that lens, levels of public trust in science are quite varied, particularly across scientific domains.

Public confidence in scientists tends to be high compared with other groups

Public confidence in scientists is relatively strong compared with trust in other institutional groups. Only the military earns more confidence from the public. Still, people seem guarded. No more than a third of people report a “great deal” of confidence in any of these groups to act in the public interest. Overall, more people express positive than negative confidence in scientists, but a 55% majority express only a soft confidence in scientists to act in the public interest.


Scientists are seen as trustworthy communicators of science-relevant information compared with other groups

Public trust in scientists as sources of information is generally higher than it is for any of several other groups in society. For example, far more people trust medical scientists to provide full and accurate information about the health effects of childhood vaccines than they trust information from pharmaceutical industry leaders, the news media, or elected officials. The same pattern occurs for public trust in information from climate scientists and from GM food scientists.

But only a minority of people (39%) report “strong” trust in information from climate scientists. An equal share of adults say they have “some” trust in information from climate scientists, and about 22% are more skeptical, saying they do not trust information from climate scientists at all or not too much. The same pattern occurs for trust in information from GM food scientists.

Medical scientists are the most trusted of the three groups. Yet here, too, some 35% of adults have only soft trust in medical scientists to give full and accurate information about the effects of the MMR vaccine.

These findings suggest that the authority of scientists to speak on matters directly relevant to their expertise is often met with some skepticism.


There is limited public trust in the knowledge and understanding of scientists in areas directly relevant to their expertise

Minorities of people express strong trust in scientists’ understanding of the causes of climate change (28%) or the health effects of eating GM foods (19%), saying that scientists understand these matters “very well.” About half of people (47%) say the same about medical scientists’ understanding of the health effects of the MMR vaccine.

At least four in 10 people say that scientists understand each of these matters “fairly well.” A sizable share of people are skeptical of climate scientists’ and GM food scientists’ understanding about the causes of climate change and the health effects of GM foods, respectively, saying that scientists understand these matters “not too well” or “not at all well.”

Perceptions of scientific consensus align closely with people’s views of scientific understanding. From the public’s perspective, there is considerable scientific disagreement about all three of these issues, particularly GM foods.


Half or fewer of people see the best available scientific evidence as routinely influencing scientific research in these domains

People hold mixed assessments about the influences operating on science research. About half of people (52%) say that the best available scientific evidence influences medical research on childhood vaccines “most of the time,” while some 36% say this occurs “some of the time” and another 9% say this seldom or never happens. There is even less public trust in research connected with climate change and GM foods; roughly three in 10 people say the best available scientific evidence influences climate research or GM food research “most of the time.”


Beyond the Primordial Ooze

“Real” Americans and the supposed divide between science and religion.

Jeff, John, Eldon, Dave, Ben, and Bruce meet most weekdays around the back table at the only McDonald’s in Ravenswood, West Virginia, chomping sausage McGriddles and swapping theories about why “it has all gone to hell.” One reason, they tell me, is that the aluminum plant south of town has shrunk from 12,000 to fewer than 1,000 employees; another is that “people nowadays simply have no common sense.” The men offer a variety of examples, focusing on out-of-town visitors who can’t drive, don’t think, and huddle mindlessly, blocking the fast food eatery’s back entrance.

The six of them are retired, having once earned their livings as electricians, aluminum smelters, mechanical engineers, and dairy farmers. They seem inordinately proud of the fact that Ravenswood is said to have once had more churches per capita than any other town in America.

“We got one on every corner,” John boasts.

“We’re in the Guinness Book of World Records,” Jeff adds, while the rest of the men sip their coffee and nod.

It is a chilly late-March morning and I’m an out-of-town visitor as well, on a road trip to explore the notion that America’s current political divisions are tied somehow to conflicting attitudes about science and religion, rationality and faith. Ravenswood, with its many churches and dying aluminum industry, seems a likely spot to ask some questions.

Jeff jumps right in, all too happy to oblige my curiosity.

“Science and the Bible go together just fine,” he reassures me. “They’re finding that more and more once they track the DNA. In fact, they’re finding that the people who were in Egypt actually came from Europe.”

Jeff—mid-sixties, stubble-faced, sporting a US Marine Corps ball cap and green plaid shirt—speaks at a dizzying pace, rattling off more ideas than my pencil can handle. But from the looks of him, he’s just warming up.

“A lot of people don’t know this,” he continues, “but Einstein got his theory of relativity directly out of the Bible. Of course, he was threatened not to talk about it because the powers that be wanted to push evolution. Science and religion used to be the same thing, before the Tower of Babel. You know that, right?”

Do I?

Jeff’s theories on Einstein and Babel are news to me, but the others just chuckle and smirk, like maybe they’ve heard all of this before.

Dave leans forward. “Listen, if you want to know about Bigfoot and UFOs, that guy right there’s your best source.” He points to John, a red-faced, thickset man in dungarees and a stained white T-shirt. “He got them both up his holler.”

I’ve clearly lost control of the conversation, and we’re only a minute or so in.

John puts down his breakfast sandwich, scowls in Dave’s direction. “They’re just trying to get my goat, trying to make me mad.” Then he turns back to the out-of-towner, the scowl widening into a friendly grin. “But I’ve … never … been mad … a day in my life.”

“Oh really,” Bruce counters. “Not a day in your life? How many marriages you had?”

“Three, I think.”

Eldon, tall, lanky, and pushing 80, scolds John. “Now you tell this man the truth about those Bigfoot stories.”

“All he saw was the hair,” Jeff intervenes. “Some hair on a tree. He didn’t see no Bigfoot.”

“He did,” Dave insists. “He just couldn’t get close enough.”

And then silence, the Sasquatch thread apparently finished.

Until Jeff decides to fill me in on John and the UFOs.

“He was out taking a pee and he chased the aliens away. He saved the world.”

For the past year, I’ve been part of a project titled Think Write Publish: Science & Religion, an attempt to use the tools of creative nonfiction to explore the idea that faith and rationality can coexist just nicely, thank you, despite various brouhahas over where we came from, how we got here, and whether the human species is or is not in the process of destroying the planet.

As of late, thanks no doubt to a horrifically contentious election cycle pockmarked by extended, often hyperbolic skirmishes over both science and religion, Americans appear even more divided, locked away in separate, seemingly incompatible camps. That’s the dominant narrative in the media, at least, but my instinct is that it can’t be quite so simple as all that. I’m guessing the truth of it all is more complex, less predictable.

Which led me to Ravenswood, and to other small towns in West Virginia, Pennsylvania, and central Ohio, where I had a series of conversations with so-called “real” Americans: folks outside of politics and professional punditry, and apart from the expert, analytical academic bubble where I—a tenured professor, professional skeptic, and inveterate agnostic—spend most of my time.

I wanted to speak to people who were neither steeped in political rhetoric nor provoked into shouting by the presence of television cameras, and my questions were as simple as I could make them: Is the rift between those who favor science and those who follow religion as real and as wide as some suggest? Is there room for more complex, more nuanced views? If so, what do they look like?

One damp winter evening, I visit the Mills family in central Pennsylvania, a conservative swath of largely white, religious counties that consistently challenge the liberal vote tallies emanating from large urban outposts such as Philadelphia and Pittsburgh.

The Mills are devout evangelical Christians, meaning that for them the Bible is the ultimate authority on all matters, every word true, a direct message from God. I join the parents, Don and Rhonda, and two of their three children in the family’s living room, in chairs prearranged into a conversational circle.

The two sons are just home on spring break from Grove City College. The older of the two, Samuel, plans to follow his father into engineering, while the younger, Isaac, a sophomore, is double majoring in biology and Biblical and Religious Studies, a combination I admit to finding surprising.

“Science and religion go hand in hand,” Isaac assures me. Confident and well-spoken, Isaac has close-cropped blond hair, the wide, square shoulders of a disciplined weightlifter, and just the hint of a beard. “There have always been strong Christians who are strong scientists. And those scientists could prove the theories that they came up with.”

He looks over at his brother and they both nod.

“In more recent history, though, there is the idea that you don’t have to prove what you believe in order for it to be true,” he continues. “Darwin, for example. He really was never able to prove each step in what is called evolution.”

Rhonda leans forward. “In today’s day and age,” she interjects, “opinions weigh more heavily than truth. Well, I hate to be a bearer of bad news, but not everybody’s opinion matters.”

“People follow what seems more exciting,” Isaac continues. “You know, is it exciting to think that something came out of the primordial ooze and changed to this and changed to that, as opposed to something being created? I mean, yeah, it seems exciting, but there’s not the evidence.”

I could argue that the idea of an all-powerful, white-bearded Creator waving his hands and fashioning all of this in seven days is just as electrifying as the idea of protohuman tadpoles crawling out of ancient muck. They’re both rather amazing, when you come right down to it. Isaac’s idea, on the other hand, that those who support evolution are merely caught up in the allure of the idea, seems to ignore most of what science knows about biology.

Isaac’s older brother, Samuel, anticipates my unspoken objection, jumping in to point out that scientific certainty can change over time. “During the Middle Ages, people thought mice came from grain, because whenever they opened a sack of grain, they saw mice running out. Today, that idea seems silly.”

“Another good example would be the Ptolemaic model of the solar system,” Isaac follows. “We thought the Earth was at the center, and then Copernicus came along, had the exact same data, but came to a different conclusion.”

Grove City College advertises “an academically excellent and Christ-centered learning and living experience,” so I feel safe guessing that Isaac and Samuel are presenting ideas learned in the classroom. They’ve paid attention, obviously, a fact that warms my professorial heart.

“Science is right, and the Bible is right,” Isaac explains further. “If they seem to disagree, it’s because our interpretation of the data is wrong.” He pauses briefly. “Or maybe our interpretation of the scriptures is wrong.”

This is some of the nuance for which I’ve been looking. Isaac is perfectly at ease with science, yet still holds the firm faith of an evangelical. Whatever problems that poses can be solved, in his view, with patience.

The father, Don, has been sitting quietly at the edge of the room, watching and listening. But when Samuel, a few months from graduation and looking locally for jobs in engineering, expresses disappointment that there are no openings at the plant where his father works, Don finally joins in:

“Yeah. The last administration did a lot to destroy the industry.”

“Coal?” I ask.

Don nods. He works as an engineer in the nearby town of Tyrone, making particle reduction machinery for the mining industry: “We crush coal, basically.” Samuel perks up, offering various examples of inconsistencies in “the data you see from Al Gore and that crew.” Climate data goes back only to the mid-1600s, he explains, “and they try to draw conclusions from ice cores, but I don’t think it’s enough.”

“Do you know where Al Gore’s family money comes from?” Don asks me.

I shrug, having no idea.

“Mining. I wonder if he’s going to give that money back.”

For a moment, I fear our conversation is going to veer into politics, marooning us on either side of the MSNBC/Fox News abyss. I’m also unsure how and to whom former Vice President Gore would return the family fortune. And then, Samuel surprises me.

“We heat our house with sustainable energy,” he announces proudly.

Isaac joins back in. “We actually heat it with the sun and the air, right?”

I look puzzled.

“We have a wood-fired furnace,” Don explains, pointing out the window to the tree-covered acreage behind the house.

“… and a very efficient wood burner,” Samuel overlaps. “We get our heat from the woods, and our syrup from the trees in the spring, and we’ve found a good balance of how much of our resources we use to maximize the efficiency of our property.”

I have liberal friends, environmentalists in their own minds, who do less than the Mills are doing. Whatever their views on global warming and fossil fuels, it is clear the boys enjoy how their steps toward sustainability prove wrong those critics who might want to equate climate change skepticism with energy gluttony.

About then it occurs to me that the house I’m sitting in, a crisscross of wooden posts and beams tying together the first floor with the second floor, and connecting the walls with the ceilings, might be part of the family’s sustainability effort as well.

“Did you build this?” I ask Don.

He smiles, glad that I came around to the realization. “Started excavating in 1995, the day Samuel came home from the hospital. In 1998, the day Isaac came home, we raised the frame.”

Isaac and Samuel joke some about growing up in the handmade house, how the network of posts, beams, and pegs formed a perfect climbing playset for two restless young boys. For a moment, they seem ready to jump up out of their chairs and illustrate.

But it is time for me to go, so the Mills can have their dinner. Rhonda walks me to the door, says she will be praying for me and for the success of the article I am writing.

“I don’t have all of the answers,” she shares, as I duck out into the chilly evening. “We can’t have all the answers, because God is God and we are not. And I’m fine with that.”

I’m fine with that, too: I’m not God. And to be honest, I’m not entirely sure how I even feel about God. I turned my back on organized religion in my late teens—like Groucho, I’m suspicious of any club, or church, that would have me as a member—but all of this had to start somewhere, right? You know, if there really was a Big Bang, who lit the damn match?

It seems clear—to me, at least—that neither science nor religion has the ultimate answer to the gargantuan question “Where exactly did we come from?” So, maybe a modicum of both faith and rationality are in order.

Or as Samuel Mills rightly points out, many Renaissance scientists were motivated by the desire to understand God’s plan in nature. Why can’t the two views simply coexist?

Thirty or so miles down the road, at Standing Stone Coffee in Huntingdon, Pennsylvania, I meet Deb Grove. Huntingdon is a railroad and manufacturing town, besieged, like most of the region, by the disappearance of blue collar jobs, but the coffeehouse sits near enough to Juniata College to have a hip campus feel.

Deb, with a PhD in biochemistry from Ohio State, worked a while in cancer research, then went on to direct Penn State’s Genomics Core Facility for 20 years. She is also a lifelong Baptist and identifies as evangelical.

“I was brought up in Ohio, with two hundred years of Baptists behind me,” she shares in a flat Midwestern accent. “Back in the ’60s and ’70s, being Baptist meant you weren’t allowed to dance and you weren’t allowed to have alcohol.”

Deb wears jeans, a striped shirt, a fleece vest sporting the American Birding Association’s logo, and the aura of someone who’s done taking crap from anyone. But then again, listening to Deb’s life story, it doesn’t sound like she’s ever had much tolerance for crap-givers.

The simple act of going off to college was, she explains, “a bit of a rebellion” for a Baptist girl in central Ohio in the 1970s. The idea of an advanced degree in biology was even more unusual, given her strong evangelical roots.

“Frankly, though, once I was in grad school, I got more grief about my gender than I did around my religion,” she tells me. “The chairman of a department I was applying to told me, ‘I don’t think women should go to grad school at all. I have daughters and I don’t think they should do this.’”

But she persisted, as the saying goes. On the day we talk, Deb has been retired for almost a year, trading in days spent sequencing the DNA of coral, ancient bison, and bacteria at the Penn State genomics lab for wandering nearby forest land in search of scarlet tanagers and golden-winged warblers.

Her LinkedIn page lists her “current” job description as:

  1. Stay in Bed as long as I want
  2. Get up and have some coffee
  3. Get some exercise
  4. Go Birding, Go Birding, Go Birding
  5. Try out my “new” used golf clubs, visit the local bowling alley, etc etc etc

I overcome a momentary surge of jealousy to ask how she managed to strike a balance at work between the empirical, evidence-based nature of science and the Christian acceptance of revelation and faith.

“I’ve never had a problem with being a scientist and a believer. I don’t see any contradiction, though a lot of people do,” she answers.

Even the concept of “creation,” one of the stickier issues separating people of faith from scientific orthodoxy, doesn’t cause Deb any sleeplessness. “For me, the idea in the Book of Genesis was that there was a Creator, and that’s as far as it goes. The Creator did this, the Creator did that. The details aren’t that important.”

And evolution?

“Microevolution is easy to see. The problem with macroevolution is that you can’t set up an experiment to prove it. So, you look at what evidence is there and you draw your conclusions.”

The conclusion she has drawn is that evolution makes sense.

“For some people in the church, my views are wrong. But I believe we are created in God’s image, with certain characteristics, and one of those is intelligence. The pseudoscience and antiscience people are driving me nuts. I want to tell these people, ‘You’re not using the intelligence God gave you.’”

I ask her if she was open about her faith among her coworkers and fellow scientists over the years, or if she primarily kept it under wraps.

She closes her eyes a second, as if tallying up, before answering.

“Well, I did keep it secret, sort of.”

She pauses again.

“I mean, if you call yourself evangelical you should be witnessing all of the time.” By witnessing, she means sharing the good news of the Lord with everyone she meets. “But I guess my approach was: if people want to talk with me about it, fine.”

She pauses, considers her answer even further. “God is going to direct people the way they need to go. I’ve seen that in my own life … in the ways that I’ve been directed.”

One more pause, and a nod.

“So, okay, maybe that’s more supernatural than a scientist would normally be, but that’s my spirituality. It’s a leap.”

Later that day, I leap across the Juniata River to meet Jeff Imler, a biology teacher for 34 years at Williamsburg High, home of the “Blue Pirates.” Jeff is in his late fifties, a bit baby-faced despite the gray whiskers peppering his goatee. He lines up nicely with my stereotype of how a high school science teacher should look: blue dress shirt (the school color), a blue-and-silver tie with slanted stripes, thick aviator eyeglasses, and a pen or two tucked into his shirt pocket.

Williamsburg is part of “The Cove,” a narrow valley nestled into Pennsylvania’s Bible Belt, and deeply conservative. I enter the room laden with questions as to how one negotiates teaching biological science—and accepted scientific views on evolution—in such a school district.

Jeff startles me, however, by insisting right off the bat that there’s no problem at all. “None,” he smiles. “Never had a parent complain or a kid complain regarding that subject area.”

“Thirty-four years is a long time,” I say. “Zero complaints?”

“Never had any trouble.”

“Really?” I’m struggling to imagine how this could be. “Not once?”

I attempt to nudge Jeff’s memory with a rather insipid joke about parents storming the classroom with torches and pitchforks, but he just shakes his head. “I think the only teachers that get into trouble are the ones that hammer evolution and tell the kids that there is no God. I’ve never done that. I’ve always taken the position with the kids that I’m not here to tell them what to believe.”

“So,” I ask, “what do you believe?”

“I believe in God, and I’ll share that with the kids. I’ll tell them that I don’t like to believe that I came out of some primordial ooze somewhere. I’d rather believe there was a divine entity that made all of this happen.”

The primordial ooze again. I’d always thought the notion that humans were directly descended from lowly, jibber-jabbering monkeys was the objectionable part of evolutionary theory, not the bubbling mud. The idea that primordial ooze, or to be precise, “primordial soup,” was a petri dish for life was put forward a full half-century after Darwin’s writings, and it is just one of several theories as to where it all might have begun. But the idea rankled Isaac Mills, and it rankles teacher Jeff as well.

“So, you don’t actually believe in evolution?” I ask.

“I do. Any organism, whether bacteria or a large mammal, that adapts to its surroundings, survives, continues to reproduce, and passes its genes on to their offspring, that’s evolution. If students want to believe that that happens by divine inspiration, that’s up to them. If they want to believe it’s by happenstance, that’s okay, too.”

Jeff stops and lifts his eyebrows, gauging my reaction.

“So, what about human evolution?”

“I don’t believe, personally,” he answers, shrugging and looking down, “that that happened.”

Though fossil evidence of early humans, such as Cro-Magnon man, is clear enough, Jeff clarifies, he doesn’t think those early ancestors are the result of evolution at all, but were instead put directly on the planet by divine intervention.

“If my students want to believe that all of this happened because of God and creation, that’s fine. If they don’t want to believe that all of this happened because of God and creation, that’s fine, too,” Jeff finishes. “Me? I just don’t want to think I came out of the blob millions of years ago.”

It becomes clear to me just how little I understand about how high school biology is taught in the twenty-first century. I thought the “scientific findings prove evolution to be true” approach was fairly standard, but I was wrong. In fact, just a few years ago, a survey of nearly one thousand public high school biology teachers showed that more than half—labeled “the cautious 60 percent” by the survey authors—present both the creationist side and the evolution-as-fact side and let the kids sort it out themselves.

I like Jeff and appreciate his candor, but he seems a bit hard to pin down. Evolution at the cellular level is easy to accept no matter what your faith, but as to the deeper question—how did humans happen to arrive on the planet—his answers seem evasive at best.

Maybe that’s necessary if you teach in The Cove, or maybe it’s because I’m sitting in front of him, notebook in hand, doing my best thoughtful interviewer nod, and asking questions that are none of my business. Whatever the reason, Jeff clearly fits somewhere in the middle of the supposed unbridgeable divide, proof that simple answers and strict categories will never capture the full picture.

My roundabout search for folks who inhabit some middle territory in the science-faith debate eventually leads me to Pete Yoder. He farms 1,600 acres of corn and soybeans just outside of London, Ohio. The corn is sold for use in making ethanol and corn sweetener, while most of the soybeans end up as tofu.

It is a large operation. Pete, cheerful, energetic, and commendably fit for a man in his late fifties, takes me on a brisk tour of the barns and outbuildings scattered across his sprawling property, stopping to explain each of the many machines he employs to run his farm: small tractors, large tractors, combines, headers, cultivators, grain conveyors, harvesters, ammonia spreaders, and even a pair of hopper-bottom 18-wheelers. He might as well be a kid showing me his Matchbox car collection, except these vehicles are real, and massive.

Many of them are GPS-guided, allowing him to track what has been planted, what has been fertilized, all of it cross-referenced with previous years’ yields, field by field, row by row. Pete clearly enjoys what he does, using the term “fun” repeatedly as he articulates how seed is fed into the spreader, how corn is cut, or how ammonia is “knifed” down into the soil.

After the tour, we retire to the maroon-sided farmhouse where he and his wife, Mary Ette, raised three now-grown children. Pete’s office, just off the family dining room, has a window looking out on a birdfeeder, populated by hungry grackles and a red finch or two.

“I’m a Christian, a person of faith, and I have no problem reconciling my faith with science,” Pete tells me as we sit on opposite sides of a large desk covered with farm catalogues. “Probably where I have incongruencies with my practices and what I believe—where those two don’t meet—is more in my political views. I find myself at odds with a lot of my fellow farmers.”

That’s an understatement, given the mainstream conservatism running through rural Ohio, and given Pete’s decidedly progressive views. A “Black Lives Matter” sign sits in a flower patch in his side yard, conceivably the only such sign in all of Madison County.

I ask what the neighbors think, and he laughs. “They’re used to me by now.”

Pete and his family are practicing Mennonites, a Christian denomination that runs from highly conservative—Old Order Mennonites share many practices with the Amish—to more modern. Traditionally, the more conservative Mennonites reject climate change, but Pete is part of a nascent Mennonite progressive movement embracing conservation and sustainability.

He employs a “no-till” method on his land, for instance, planting soybeans between the previous year’s corn rather than cutting the stalks and plowing them under, limiting erosion and chemical runoff. What becomes clear to me as we talk is that Pete’s focus on state-of-the-art farm machinery and fancy GPS guidance systems is not just farm-nerd gadgetry but connects directly to his wish for sustainability: each acre he doesn’t till, each row that requires less chemical treatment, every step that allows him to use less horsepower in his machinery and burn less fuel, is an environmental act.

He shrugs when I ask about this: “My farmer friends all laugh at the idea that a fifteen- or twenty-thousand-dollar addition to a tractor is going to save the world from climate change. They just scoff.”

Pete’s sustainable farming practices are based in science, but for Pete the practices are a spiritual matter as well. He was among the first in his part of Ohio to place an agricultural easement on his land, guaranteeing that it will remain a farm in perpetuity. Though he deeply loves farming, he constantly worries about the long-term effects.

“Just the other day I removed a fence row,” he explains, meaning he turned a patch of wild, uncultivated land into land that could be planted. “But I know that I was also removing habitat for animals and birds. I look out at this landscape here and know it was once wooded, yet I continue to take down trees.”

His voice softens. “I used to want to own a farm, but the older I get the more I think of myself as just a caretaker.” He motions out the window, to the field across the road, a vast expanse of flat land and dried corn stalks. “I know I’m going to be out of here someday. I’m trying to think about what I’m leaving behind.”

My attempts to verify that Ravenswood, West Virginia, had so many churches per capita that it was once listed in Guinness come up empty. It may be just another myth, like Bigfoot, or the idea that America’s views on science and religion can easily be pigeonholed.

They can’t.

Nor are the two approaches necessarily at odds. Science and religion are both modes of inquiry, and both can help us to experience our world in richer, deeper ways. Choose one, choose the other, or if you can, choose a bit of both.

Yet for many people, evolution seems to be the sticking point. How did we get here? The idea that an all-powerful divine architect simply waved his hand and created us from nothing has a certain appeal. But to some of us, it is unacceptable, based too much in faith and unprovable religious teachings, what some call myths, going back thousands upon thousands of years. And of course, it raises the question “Why?” What did this divine architect have in mind? What’s our purpose here?

The pure evolutionary perspective, the similarly sticky “primordial ooze,” has its own shortcomings. It is scary, for one thing. Are we out here on our own, undirected, no divine plan? The idea of unorchestrated evolution also suggests we are not actually so special. Not chosen. What’s to keep the orangutans from hitting the genetic adaptation lottery one day soon and jumping the line?

Humans have been wrestling with these questions for as long as they’ve been stringing two thoughts together one after the other. I’m guessing the riddle of it all won’t be resolved anytime soon.

It takes some prodding, but I eventually get my retiree friends at McDonald’s to weigh in on the evolution dilemma.

Eldon, the eldest and one of the quieter of the men crowded around the table, firms up his mouth and shakes his head. “I’m not going to answer that.”

Bruce agrees. “Not a thing I really want to talk about.”

But Jeff, true to character, just can’t seem to keep his mouth shut. “We’re in the Bible Belt,” he chuckles. “We don’t believe in evolution.”

John takes the final bite of a fried hash brown. “My ancestors didn’t swing from no trees by their tails. They used their hands.”

The men are enjoying themselves. That much is clear.

“Yeah,” Jeff snorts. “Maybe so. But they still flung their poop like a monkey.”

Finally Dave enters the fray, his tone more serious. “I do believe in the Bible, and I believe in evolution. Evolution is simply the improvement of the species. Well, if you know anything at all about animal husbandry, the hog … You look at the hog, and you can see it has changed in my lifetime. It used to be shaped like this in the back—” he makes a small arch with his hand “—and now they’re flat. That’s evolution.”

“Huh,” John counters. “Science just went and made those hogs longer ’cause they wanted more pork chops.”

Jeff nods. “Yeah. And more bacon.”

There is, for the moment, enthusiastic agreement that science and religion are both fine, as long as we have more bacon. Then my Ravenswood comrades commence downing their last sips of coffee, pulling on jackets, and making for the door.

Breakfast is over, until tomorrow.

The Best Panaceas for Heartaches

Standing before a crowd of listeners in 1914, the fundamentalist preacher Billy Sunday took a few moments to ridicule what he considered to be science’s pretensions of being a new salvation. “People are dissatisfied with Philosophy and Science and New Thought as panaceas for heart-aches!” he cried:

It does not amount to anything, when you have a dead child in your house, to come with these new-fangled theories .… Let your scientific consolation enter a room where the mother has lost her child. Try your doctrine of the survival of the fittest. Tell her that her child died because it was not worth as much as the other one! … And when you have gotten through with your scientific, philosophical, psychological, eugenic, social service, evolution, protoplasm and fortuitous concourse of atoms, if she is not crazed by it, I will go to her and after one-half hour of prayer and the reading of the Scripture promises, the tears will be wiped away and the house from cellar to garret will be filled with calmness like a California sunset!

Billy Sunday was not known for nuance; a journalist once described a Sunday sermon as “the most condemnatory, bombastic, ironic and elemental flaying of a principle or a belief that [he] ever heard in [his] limited lifetime and career from drunken fist fights to the halls of congress.” The contrast Sunday describes is indeed stark: for someone faced with the death of a child, science leads to despair and madness, while Christian faith leads to a deep sense of peace. Though hyperbolic, Sunday’s condemnation of what he presented as scientists’ claims to provide both salvation and solace efficiently—even eloquently—captured profound, long-standing tensions between the promises of Western science and the obligations and goals of Christian faith.

I have taught courses on the history of science and religion, evolutionary theory, and medicine for more than a decade now. But although it is my job as a historian to try to understand the complex factors behind positions and beliefs, I never quite grasped what might be at stake in Sunday’s belligerent sermon against science—and, indeed, in the long-running debates among fundamentalists, modernists, and atheists—until a few years ago, when I witnessed the struggles of dear friends during the illness and loss of their six-month-old baby girl. Claire was born with a congenital condition that meant her heart and liver could not function properly. Surgeons made four attempts to repair the broken pump, the clogged filter, and the missing tubing; all ultimately failed.

In many of my classes students learn about modern science and medicine’s beginnings in seventeenth-century mechanical philosophy. Thinking of the body as analogous to a machine led not only to arguments about God as the Designer but also to the idea that broken parts might be fixed through surgery. That foundation has led to many of the greatest triumphs of modern medicine (though, in the intervening centuries, discussions of “God as the Designer” have receded from scientific texts). Yet all of this seemed of little comfort when the doctors could not, in fact, fix beautiful little Claire’s broken mechanisms.

Amid witnessing doctors’ efforts to preserve a child’s life, and her devoted parents’ struggle to understand medicine’s failure, I began paying more attention to certain biographical facts in the lives of the scientists—and science-watchers—I read with my undergraduates. The seventeenth-century naturalist John Ray, who wrote one of the most famous books about God as Designer, lost his daughter Mary when she was 12. The Enlightenment’s Erasmus Darwin, who developed one of the first theories of evolution, buried three of his 12 (legitimate) children when they were infants. The codiscoverer (with Charles Darwin) of natural selection, Alfred Russel Wallace, lost a boy at six, and “Darwin’s bulldog,” Thomas Henry Huxley, buried his four-year-old, firstborn son. Botanist Joseph Dalton Hooker lost his little girl Maria when she was six. (Within an hour of her death he wrote to Charles Darwin, who lost a three-week-old infant, Mary Eleanor, in 1842; a ten-year-old daughter, Annie, in 1851; and an 18-month-old son, Charles, in 1858. “I think of you more in my grief,” Hooker confided, “than any other friend. Some obstruction of the bowels carried her off after a few hours alarming illness—with all the symptoms of strangulated Hernia.”) Mary Harriman, a philanthropist who bankrolled American eugenics work, lost a five-year-old boy to diphtheria. Annie Besant, who tried to convince Darwin to support her campaign for contraception, became an atheist after watching her seven-month-old daughter struggle with a terrible bout of whooping cough. One could go on and on.

None of this, of course, is surprising to anyone familiar with both the state of medicine and the prevalence of childhood infectious diseases prior to the twentieth century. And children’s deaths are acknowledged, at times, as important within the biographies of these influential men and women and their friends. Indeed, the influence of the loss of Darwin’s daughter Annie on his beliefs, including his theory of evolution, is the subject of an entire book and a major motion picture. But—perhaps because the loss of a child is not something many of us, at least in certain parts of the world, have to experience thanks to modern medicine and public health—I had never really thought through the commonality of my subjects’ experience with childhood death and suffering until I witnessed Claire’s parents struggling to reconcile the efforts and failures of science with God’s providence. This heightened attention to certain events in men and women’s lives, and certain paragraphs in their writings, made Sunday’s sermon, in particular, stick in my mind. I began to wonder: What role has what is said to, or believed by, parents at the bedside of a dying child played in individuals’ perceptions of the relationship between science and religion? Have the available stances on both God and Nature amid these tragic confrontations with suffering influenced individuals’ decisions on whether that relationship is one of harmony, conflict, or something in between? These questions are, in many ways, impossible to answer, for often such loss is accompanied mainly by profound silence. But asking them revealed what I find to be a very meaningful thread in many of the primary sources I use in my research and teaching.

The thread begins in the seventeenth century, amid the grand theories associated with the Scientific Revolution, but to notice it one must pay close attention to the diaries and correspondence of famous figures in the history of science, and not just their classic works. Consider, for example, that six years after the first edition of his famous natural theology, The Wisdom of God Manifested in the Works of Creation, appeared, John Ray lost one of his four beloved daughters to jaundice. “My dear child,” he wrote to Hans Sloane in early 1697, “for whom I begged your advice, within a day after it was received, became delirious, and at the end of three days died apoplectic, which was to myself and wife a most sore blow.” A month later Ray wrote of the continued influence of this “sad accident” on his ability to work. His wife, he wrote, “is full of grief, having not yet been able fully to concoct her passion.” He blamed himself, for he had not given the little girl a remedy that had proved effectual for himself in the same disease. But he does not seem to have blamed or questioned his beloved all-powerful, all-wise, and benevolent God.

I have often assigned The Wisdom of God as an example of seventeenth-century natural philosophers’ devout belief that science and religion are in harmony. Ray reveled in detailed descriptions of animal and human anatomy and used the extraordinary fitness of animal parts to their uses to demonstrate the existence and attributes of God. And indeed his work is a good example of the belief—common at the time—that God gave men two books through which to know Him: the Book of Scripture, and the Book of Nature. Nature, Ray argued, helped one make “out in particulars” what Scripture asserted in general concerning the Works of God, namely In Wisdom hast thou made them all. In describing human anatomy, Ray dwelled on the purposeful parts of the body as beautiful examples of the effect of wisdom and design. Thus, he concluded, the body of man was “proved to be the Effect of Wisdom because there is nothing in it deficient, nothing superfluous, nothing but hath its End and Use.” Indeed, Ray insisted that a man who could look upon Nature and yet still disbelieve in God “must needs be as stupid as the Earth he goes upon.”

My students tend to want to throw counterarguments at John Ray: What about snakes? What about predators? What about disease? But inevitably Ray knew a lot more about disease and suffering than they do. His was not a naïve theodicy (an explanation of why a good, all-powerful, all-knowing God permits evil and suffering). When Ray reflected upon the fact that sleep alleviates pain as evidence of the wisdom of a God, he spoke from experience. At the time of writing his famous book, he suffered from blisters and chilblains; ulcers on his legs sometimes prevented him from walking; and his stomach gave him digestive trouble that incapacitated him for days. Illness, disease, and death were close, familiar, and ever-present to men and women in the seventeenth century. Nearly a third of children died before age 15. The bubonic plague still periodically swept through London and its outskirts. John Ray knew all too well that human beings die from diseased organs, succumb to madness, and suffer from malfunctioning parts. But that by no means vitiated his argument: indeed, the whole point of his book was that in the face of widespread pain and suffering, the marks of design proved God’s benevolence, wisdom, and goodness. Toward the end of his life, Ray was at times so reduced to weakness by the sharp pain of chronic sores on his legs that he could not stand alone, and he even confessed to despairing of life itself. Some days his sores so spoiled his memory that he could not pay sustained attention to the animals and plants he so loved to study. Yet even as his memory and body failed him, so that he was “almost continually afflicted with pain,” he urged his friend James Petiver to continue the task of “carrying and promoting natural history and the knowledge of the works of God.”

Upon first reading, submission and obedience to one’s God-given lot in life seems the main message of natural theology classics such as The Wisdom of God. After all, some people, such as St. Bernard, the medieval French abbot of Clairvaux, argued that “to consult physicians and take medicines befits not religion.” Yet we know from Ray’s letters that he was anything but submissive in the face of bodily pain. His letters are full of new prescriptions tried and disappointment on the heels of great hope of relief. And remember he blamed himself, not God, for his daughter’s death, on the grounds that he had not given her the correct medicine. But how was the anxious search for a medicine to heal his terrible sores to be reconciled with devout belief in a wise, all-powerful, benevolent God?

The answer to this question—and the explanation of Ray’s stance at the bedside of his dead child—lies in the fact that Ray viewed medicines as God’s gifts, albeit gifts that would be revealed only through human effort. He envisioned mankind taking up the tools provided by a wise and good God to improve the human condition. Ray spoke of the human hand, for example, as “wonderfully adapted” for all the uses that made man an agent of civilization and improvement. He believed that God had placed man “in a spacious and well-furnished world,” full of beauty and proportion, with materials to be molded and land capable of improvement by industry. God’s provision included seeds and fruit capable “of being meliorated and improved” by human art, and useful for food and medicine. Ray described plants such as the Jesuit’s bark tree (quinine) and the poppy (opium) as clear evidence of “the illustrious Bounty and Providence of the Almighty and Omniscient Creator, towards his undeserving Creatures.” And—this is key—Ray was sure “there may be as many more as yet discovr’d, and which may be reserv’d on purpose to exercise the Faculties bestow’d on Man, to find out what is necessary, convenient, pleasant or profitable to him.”

Ray worshipped a God, then, who had organized the world and the mind of man so that men could improve upon their surroundings through studying natural philosophy and natural history. God had even made man a social creature, so that he could improve his understanding “by Conference, and Communication of Observations and Experiments.” (What a perfect justification for attending a Royal Society meeting!) Ray’s attitude was an early example of the belief that one could and should improve life in the here and now, even amid deep faith in the hereafter. Critically, that stance shifted the blame for earthly evil and suffering to man’s ignorance. Faced with the death of a beloved, as hard as it was to blame oneself, at least one need not blame one’s God.

The trajectory of this bargain—and it was a bargain, with important costs and benefits—is fascinating. The historian John Hedley Brooke has described how despite seventeenth-century natural philosophers’ insistence that natural laws were not binding on God, the pressure to make them so arose directly from the wish to address the existence of suffering. Even Robert Boyle, a founder of the Royal Society, who was said never to have mentioned the name of God “without a pause,” thought it “perhaps unreasonable” to expect God to intervene in natural law to save an individual (to suspend, for example, the law of gravity when someone fell over a cliff). At the time, that temptation to transfer agency (and thus fault) to Man rather than God often removed God to some distance. Take Erasmus Darwin’s epic evolutionary poem, The Temple of Nature. At one point Erasmus describes the slaughterhouse of the warring world—predation, pestilence, famine, earthquakes, flood—and wonders:

Ah where can Sympathy reflecting find
One bright idea to console the mind?
One ray of light in this terrene abode
To prove to Man the Goodness of his God?

Erasmus’s reply was that so long as one placed all the good and all the evil on the scale, “where the Good abides, / Quick nods the beam, the ponderous gold subsides.” Lest a reader miss the point behind Erasmus’s elaborate lines about Nymphs and Muses, he circled back to it in a footnote later in the poem:

When we reflect on the perpetual destruction of organic life, we should also recollect, that it is perpetually renewed in other forms by the same materials, and thus the sum total of happiness of the world continues undiminished; and that a philosopher may thus smile again on turning his eyes from the coffins of nature to her cradles.

One can almost imagine Erasmus, thinking of the cradles in which his own babes lay, grasping for some underlying goodness in it all. Once we abandon the comforting fairy tale that men and women of prior ages were not as attached to their children, we can see within these lines the author’s deep experience of the world’s large potential for misery and suffering. (The fairy tale was apparently first told by the social historian Philippe Ariès in his 1960 book, Centuries of Childhood. Perhaps it is an indication of how truly unimaginable such a state of existence was by the mid-twentieth century; so unimaginable that it was imagined away. Historians of the early modern period have provided extensive—and heartbreaking—evidence that mothers and fathers experienced extreme anguish at the loss of their children.) Erasmus insisted there must be a Goodness to it all, despite puerperal fever robbing young husbands of their wives. Despite the dozens of infectious diseases that robbed young mothers of their infant children. But one had to take the long-term view to witness such goodness, to see that the good outweighed the bad and that “the sum total of happiness of organized nature” increased, rather than diminished, with death. And this is where things really get interesting. For in contrast to John Ray, Erasmus believed in transmutation—evolution, in modern parlance. In his view, progressive change in biological forms provided good evidence of an overall Goodness to the plan of creation, despite death and struggle. Hope could also cling to the intellectual and technological progress of mankind, rooted in the study of natural law:

Last, at thy potent nod, Effect and Cause
Walk hand in hand accordant to thy laws;
Rise at Volition’s call, in groups combined,
Amuse, delight, instruct and serve Mankind.

A footnote explained how those who discover causation furnish the powers of producing effects. These were the men who discovered and improved the sciences “which meliorate and adorn the condition of humanity.” For Erasmus, both the evolutionary progress of life and the intellectual progress of man proved the goodness of the system. Though the distance of Erasmus’s “First Cause”—which created the rule of natural law “perhaps millions of ages before the commencement of the history of mankind”—would have caused John Ray great distress, he would have sympathized with the belief in science as the means of ameliorating the human condition. And certainly he agreed that, on balance, the system proved God Good.

The thread evident in both Ray’s and Erasmus Darwin’s work might be called a “Science as God’s Provision to Ameliorate Suffering” theodicy. And it is perhaps most eloquently stated in the concluding pages of Vestiges of the Natural History of Creation, a Victorian sensation published anonymously in 1844. Scientists, including Charles Darwin’s mentor Adam Sedgwick, condemned the book as atheistic, and historians cite his reaction as at least partly explaining Darwin’s famous 20-year delay in publishing On the Origin of Species. We now know the author of Vestiges was Robert Chambers, a Scottish publisher; when asked why he had not put his name to his work, Chambers gestured to the house in which resided his 11 children and replied, “I have eleven reasons.” The concluding chapter of Vestiges provides a telling portrait of what was at stake for some Victorian readers faced with a close versus a distant Creator (a distinction that often mapped onto static versus evolutionary creations): “How, the sage has asked in every age,” Chambers wrote, “should a Being so transcendently kind, have allowed of so large an admixture of evil in the condition of his creatures?” The question must have pressed on Chambers and his wife, Anne, amid the death of three of their 14 children in infancy. In the pages of Vestiges Chambers’s reply to the age-old question was as follows: The fixed laws established by the Deity were his most august works, permitting great good. But left to act independently of each other, those laws could have effects only generally beneficial, since often there must be interference of one law with another, and thus evil be produced. He gave the following example:

Suppose … that a boy, in the course of the lively sports proper to his age, suffers a fall which injures his spine, and renders him a cripple for life. Two things have been concerned in the case: first, the love of violent exercise, and second, the law of gravitation. Both of these things are good in the main. In the rash enterprises and rough sports in which boys engage, they prepare their bodies and minds for the hard tasks of life. By gravitation, all moveable things, our own bodies included, are kept stable on the surface of the earth. But when it chances that the playful boy loses his hold (we shall say) of the branch of a tree, and has no solid support immediately below, the law of gravitation unrelentingly pulls him to the ground, and thus he is hurt. Now it was not a primary object of gravitation to injure boys; but gravitation could not but operate in the circumstances, its nature being to be universal and invariable. The evil is, therefore, only a casual exception from something in the main good.

Chambers then addressed the question of what one must do in the face of this knowledge. “The Great Ruler of Nature,” he wrote, “has established laws for the operation of inanimate matter, which are quite unswerving, so that when we know them, we have only to act in a certain way with respect to them, in order to obtain all the benefits and avoid all the evils connected with them.” Yes, great suffering existed, but in the unity of nature’s laws the First Cause had benevolently provided the means of escape. Once man saw the human constitution as merely a complicated but regular process in electrochemistry, for example, the path toward elimination of disease, “so prolific a cause of suffering to man,” became clear: to learn nature’s laws, and to obey them. This was an answer to the problem of suffering that could combine the endeavor of science with a deep faith in the benevolence of God’s plan. Too, it offered a pious defense of why science should be valued and supported.

Indeed, perhaps one of the most interesting productions of this “Science as God’s Provision to Ameliorate Suffering” thread is Andrew Dickson White’s 1896 History of the Warfare of Science with Theology in Christendom. This book has often been used as evidence that science and religion have always been in inevitable conflict. And yet, as historians have often pointed out, White insisted “true religion” was not in conflict with science. Indeed, he believed his book tracked the development of a truer Christianity, in which human beings could trace God’s providence and goodness in humanity’s movement away from dependence and submission to the environment, toward controlling the forces of nature to satisfy the wants of humanity. One profound example White gave of orthodox theology hindering this progressive movement—of both religion and science—appears in a discussion of the medieval church’s (supposed) persecution of Roger Bacon for pursuing natural philosophy:

In two recent years sixty thousand children died in England and in Wales of scarlet fever; probably quite as many died in the United States. Had not Bacon been hindered, we should have had in our hands, by this time, the means to save two thirds of these victims; and the same is true of typhoid, typhus, cholera, and that great class of diseases of whose physical causes science is just beginning to get an inkling.

White was called out for the strangely unhistorical passage at the time, but this brief but weighty tirade against any interference in science makes sense when you consider it was written by a man who had almost lost a son to typhoid a few years earlier. For some people, at least, White’s account of science triumphing over orthodox theology became a lens through which God’s immanent presence in history could be seen. And through that lens some found a path toward harmonious relations between science and religion. Indeed, the naturalist Karl P. Schmidt recalled that White’s book contributed most to his own reconciliation with religion. And White’s grand narrative inspired the Catholic modernist George Tyrrell to try to reconcile theology with modern science, rather than assume such reconciliation was impossible.

The key to both Schmidt’s and Tyrrell’s responses to The Warfare, I believe, is that White gave readers an opportunity to find evidence of God’s benevolence in man’s increasing ability to ameliorate suffering through science. That opportunity was embraced in Contributions of Science to Religion, edited by the famous modernist theologian Shailer Mathews. Published in 1924, the book was an attempt to counter increasing talk of a conflict between God and Evolution, most evident in William Jennings Bryan’s campaign to pass legislation against teaching evolution in American schools. In his contribution to the book, the influential sociologist Ellsworth Faris described White’s History as telling the story of an ongoing change from dependence and submission to conscious intervention and control. Human nature itself was being brought within the realm of the sciences of psychology and sociology, opening up the hope that it could be controlled. And thus, Faris noted, “war, poverty, and crime which were formerly defended, apologized for and even conceived as a part of the divine plan, appear to our modern eyes as problems to be solved, as challenges to the technique of control which scientific men persistently seek.” Faris did not explicitly attribute the possibility of progress in the sciences to God, but another contributor, Eugene Davenport, did, writing: “Whoever soberly considers what science has achieved for agriculture in the short space of half a century, can but render thanks to Almighty God for His revelation of the laws of nature, and he will face the future with confidence unlimited and with gratitude unbounded.”

But it was exactly these kinds of “scientific consolations” Billy Sunday had railed against 10 years earlier. Sunday found scientists’—and liberal and modernist Christians’—emphasis on nature’s laws a poor kind of salvation, which seemed to sacrifice the truly redemptive power of prayer, belief in miracles, and the comforting promise of Heaven. It was useless, he insisted, at the bedside of a dead child. By marked contrast, liberal and modernist ministers described the very fate of Christian faith as at stake if Americans turned to Sunday’s brand of Christianity—a Christianity that included, for example, petitionary prayer. In a 1926 sermon, the Unitarian Reverend Harold Speight described how often he heard people complain bitterly of unanswered prayer: “The desired aid did not arrive, the sickness was not stayed,—and then faith went, as a candle flickers and goes out if an outside door is open.” And yet, Speight urged, it was at that very moment that science could reestablish and strengthen faith. If men and women only understood that at the moment of loss, it was not God who was absent, but the scientific knowledge required to control nature—that someone’s ignorance “accounted for the disaster which prayer had failed to avert”—then not only could faith remain, but action could be “diverted as rapidly as possible to the purposes of science” so that men and women could be of better aid in the future. Speight believed, in other words, that God had organized the world in such a way that skill could be improved, albeit slowly and laboriously, via science. Indeed, for Speight, doing science became a better form of prayer, for in progressively alleviating suffering and pain, human effort and ingenuity would ultimately vindicate faith in God’s benevolence, power, and wisdom. This, for Reverend Speight, was not just “scientific consolation” but a religious call to trust in natural law and pursue scientific progress.

John Ray, Erasmus Darwin, Robert Chambers, Shailer Mathews, Andrew Dickson White … all, despite their theological differences, would surely reply “Amen” in theory. But at the bedside of a lost child, both believers and nonbelievers must concede that Speight’s optimistic demand to take the long-term view is perhaps too weak a comfort for the human heart. I note above that, inspired by Claire, I began noticing—for the first time—a meaningful thread in the primary sources I study with my students. Why do I think this thread is meaningful and important? Because I believe that in attending to the moments and experiences where decisions regarding the relationship between science and faith are at their most starkly personal and intimate, we might develop a more empathetic understanding of both historical and present stances, whether they agree with our own or not. For who can judge the response of a mother or father at the bedside of a dying child—in the seventeenth century or the twenty-first? That an individual’s attitude toward science may be intertwined with answers to why God would allow such things, or whether God exists at all, is worth attending to. That attention might produce a more historically accurate portrait of the factors involved in controversies over the relationship between science and religion. And just as important, it will help ensure that we view stances through a more compassionate lens, sensitive to the meaning found (or lost) in moments of both misery and bliss.

“Shuddering Before the Beautiful”: Trains of Thought Across the Mormon Cosmos

If then th’ Astronomers, whereas they spie
A new-found Starre, their Opticks magnifie,
How brave are those, who with their Engine, can
Bring man to heaven, and heaven againe to man?

—John Donne

If you take the Green Line from the Salt Lake City International Airport to the Temple Square TRAX station downtown, you’ll be within walking distance of the Salt Lake Temple. If you decide to venture onto the temple grounds and cast your eyes up its lofty spires and battlements, the castle-like exterior will reveal a host of astronomical markings: sunstones, moonstones, Earth stones, and even Saturn stones adorn its granite face. Most captivating for me as a teenager—a starry-eyed wannabe scientist and scrupulously obedient Mormon—was the Big Dipper on the western face of the temple’s central tower. The seven stone stars are positioned so Dubhe and Merak, the two end stars of the cup, align toward Polaris, the North Star, just as they do in the night sky—an elegant tethering of Earth to heaven.

The architect of the temple, Truman O. Angell, said he included the Big Dipper to remind Mormons that the lost might find their way by the aid of the priesthood, the power of God given to men to do his work. When I was a teen, my exclusion from this priesthood—as a female—did not consciously bother me. But I did long for knowledge, for understanding, and yes, even for power: the power to heal the sick, to baptize the living, to raise the dead.

I was also excited to find out what exactly happened in the upper echelons of our temples, where many of my faith’s most sacred ordinances and rituals are held. Before they go on full-time church missions or marry in the temple, Mormons are expected to attend a ceremony called the Endowment, where they receive additional spiritual instruction and make covenants with God. Church leaders forbid members to disclose the details of this ceremony outside the temple, so I didn’t know what covenants I was expected to make. However, we were encouraged to learn about the temples, so to prepare, I consumed Hugh Nibley’s 1992 tome Temple and Cosmos. Nibley taught at Brigham Young University and was highly respected in Mormon circles as a scholar of ancient cultures and as a prolific—if esoteric—apologist for Mormonism.

In Temple and Cosmos, I learned that templum originally referred to any consecrated space. A Roman augur, or prophet, would find an open space and, with his staff, scratch an encircled cross into the ground, the urbs quadrata. With this earthy compass, the prophet could establish the precise direction in which prophetic birds flew. He’d wait at the point of origin between the cardo (N/S line) and the decumanus (E/W line), and he’d record when these winged messengers came, or failed to come. He’d then use these signs from heaven to understand the universe and his place in it. Nibley saw this practice as a parallel for modern temple worship, and I was enchanted with the idea. The temple was the faithful Mormon’s urbs quadrata, a place to get my bearings, the ultimate spiritual coordinate system.

Brigham Young, second prophet of the Mormon Church after Joseph Smith, also knew a thing or two about coordinates. An inspired planner, he oriented entire cities around the Salt Lake Temple. One block north of the temple was 100 North, one block east 100 East, and so on. I always knew how far away the temple was. My home in Sandy, Utah, was about 11 blocks east and 110 blocks south, at the foot of Lone Peak. Looking westward across the valley, I could see the Jordan River Temple, the temple where eventually I would promise to give myself to my husband and he would promise to receive me. At night the white glow from its one massive spire acted as a beacon of peace and hope—and a literal beacon for airplanes flying toward the Salt Lake airport.

My best friend, Brent, lived up the street. On Sundays, he made the clock tick a little faster and the hard beige chair seem a little softer as we talked and laughed—quietly—and on weekday mornings he forced me to listen to Counting Crows and Third Eye Blind as we drove to high school. We competed fiercely for the top grades in our classes, and he usually beat me. I especially appreciated his friendship because it was difficult for me to connect with other girls in my neighborhood/church/school, whose primary focus seemed to be attracting boys and preparing for marriage and families. But who wanted to talk cosmetics when you could talk about the cosmos? What are boys to black holes? If only God could tell me what lay beyond the event horizon! As I studied The Book of Abraham, a text Joseph Smith said he had translated from ancient Egyptian papyri, I grew wistful. Why couldn’t the Almighty give me a vision like he’d given Abraham, a glorious revelation of all God’s creations—including the prophesied existence of a planet named Kolob, a planet “nigh unto the throne of God”? Wasn’t I, like Abraham, a seeker of greater happiness, righteousness, and knowledge? How long would it take before I proved myself worthy? It didn’t seem right that I had to wait so much longer than my male friends and leaders for heavenly power, knowledge, and connection, just because I was female.

I wrote page after page—hundreds of pages—in my scripture journals. I often copied scriptures like the monks of old, as if doing so would cause new meaning to spring from the words. At the same time, I read Stephen Hawking’s A Brief History of Time and other science books whose vocabulary captivated me: accretion disk, Schwarzschild radius, singularity. In class, a friend called me a “space dork” for passionately describing this new information about the universe; after that, I tried to curb my enthusiasm in public. But privately, as Mark Twain once wrote in a letter, I yelped astronomy like a sun dog and pawed Ursa Major and other constellations. My neighborhood seemed small for my ambitions, and I began to chafe under rigid gender expectations.

Still, science conveniently seemed to confirm many of my religious beliefs. When new studies showed that beams of light could physically move small particles of matter, I considered it “proof” that Joseph Smith’s many heavenly visitors, who were often described as arriving in glowing pillars of light, knew how to ride the light rail, too. (Among these visitors were Adam, Abraham, Moses, Elijah, and Elias from the Old Testament; Peter, James, John, and Paul from the New Testament; Nephi, Mormon, Alma, and Moroni from the Book of Mormon; and, in the 1820 vision that started it all, God the Father and Jesus Christ.) My Sunday School teacher, a chemist, once said, “Of course Jesus could walk on water! He knew how to manipulate surface tension. If he wanted to, he could walk through walls by rearranging the empty space in atoms.” In 1992, scientists announced that the pulsar Lich hosted planets; the pulsar was not the first ever discovered, but its companions were the first observed instance of Earth-sized exoplanets orbiting another star. Maybe we weren’t crazy after all for believing in the planet Kolob or believing that God would eventually give to the righteous, as gods themselves in the afterlife, the power to create their own stars and planets. The first planet I would create, I decided one Sunday, would have variable gravity so I could hike up the highest mountain, throw myself off the top, and float gently back to the ground. I didn’t see my projected ascension to godhood and the creation of these new worlds as greedy, blasphemous, or delusional; I saw it as the natural birthright of God’s children, like a son inheriting his father’s business. It was a promise extended to anyone willing to come unto Christ—even women and (after 1978) anyone of any skin color.

Science and religion went hand in hand in many other ways. One of Joseph Smith’s revelations said that the elements are eternal, which meant Mormons had no quarrel with the law of conservation of energy and generally rejected the ex nihilo creation doctrine many other Christians believed. (We were flexible on the definition of a “day,” too, in the creation story, so the accepted geological age of the Earth, as defined by isotope-studying geologists, never clashed with Genesis; seven “days” might mean 4.5 billion years.) Neither did Mormons object to a universe filled with increasing disorder, as defined by the second law of thermodynamics, which says that any ordered system tends to dissolve into chaos over time. Hugh Nibley testified in Temple and Cosmos that it was only through Christ’s suffering, death, and resurrection that we could ultimately be saved from this degenerative process of entropy. God was the Creator, but he had to live by his own laws, too, so the idea of science opposing our religion seemed laughable.

And if non-Mormon archeologists hadn’t found incontrovertible evidence proving that the Book of Mormon was a true record from ancient American inhabitants, that was okay—maybe the archeologists were looking in the wrong places, or maybe God wanted us to live by faith and not evidence. The Book of Mormon itself contained multiple warnings for those who questioned God and demanded proof of gospel truths. In one epic confrontation, Korihor, an anti-Christ, goads the prophet Alma:

And now Korihor said unto Alma: If thou wilt show me a sign, that I may be convinced that there is a God, yea, show unto me that he hath power, and then will I be convinced of the truth of thy words.

But Alma said unto him: Thou hast had signs enough; will ye tempt your God? Will ye say, Show unto me a sign, when ye have the testimony of all these thy brethren, and also all the holy prophets? The scriptures are laid before thee, yea, and all things denote there is a God; yea, even the earth, and all things that are upon the face of it, yea, and its motion, yea, and also all the planets which move in their regular form do witness that there is a Supreme Creator. (Alma 30:43–44)

The very grandeur and complexity of the cosmos—despite its degenerative and destructive nature—bore witness of God’s power, before I had ever heard anything about teleological arguments, watchmakers, or David Hume’s Dialogues Concerning Natural Religion. Any argument against the existence of God meant that someone was looking for trouble and an excuse to sin. Doubt was the foil of faith, sent from the devil to weaken and confuse us. Already struck mute by Alma’s God-given power, Korihor goes begging for food and is trampled to death by a random throng of Zoramites. The lesson is clear: those who doubt, look out.

But I had few doubts in those days. (Too few, I think, which made my eventual disillusionment even more painful.) When my faith was challenged with new scientific information—new for me, anyway—Mormonism acted like the semipermeable membrane of a cell: the new information was either allowed to pass and assimilate into my worldview, or it was rejected as untrue and banned from being investigated further. The theory of human evolution? Yes, it could enter, albeit with trouble, since the Church had no official position on evolution but still culturally claimed white-skinned Adam and Eve as the first common ancestors of all humans. And what about the assertion that homosexuality occurs naturally in humans and is not inherently evil? No, not a chance; the leaders had made themselves clear on that point, although they have recently softened this stance in the wake of so many teen suicides. When I rejected facts because of my faith, I brain-tagged the information with the extremely useful title of “anti-Mormon,” a label liberally applied to things or people I didn’t like or didn’t understand.

Such a label could easily be applied to people in other religions, too. One day, outside a Christian convention downtown, someone handed me a pamphlet. It was the first of many “anti-Mormon” pamphlets I would receive from people trying to save me from my religion. On this particular pamphlet was an image of Jesus, his eyes replaced by flames, and beneath it was the word sinner. I did not recognize this angry Jesus. Why should this fire-eyed god be upset with me if I were trying my best to follow his teachings? The Jesus I knew was based on Greg Olsen’s calm, quiet paintings: the Savior wore soft robes and expressions and held lambs as gently as newborns. In church movies, Jesus sat and laughed with children and coaxed large monarch butterflies to land on his shoulder. The only time my Jekyll Jesus went Hyde was when people started commercializing his temple.

I threw the pamphlet away without opening it.

In the summer of 2001, just before my senior year of high school, the Utah Transit Authority had almost finished building a second light-rail line, the Red Line, out to the University of Utah. A good bus route was still in place, however. Descending the steep bus steps, I marched into the university’s cosmic ray research department and, with all the confidence my seventeen-year-old self could muster, told the program manager why he should hire me as a summer intern. I suspect he was more amused than convinced, but he hired me on the spot. Day one, on the conference room whiteboard, he began an overview of the project and my assignment.

“Cosmic rays aren’t actually rays—”

“They’re tiny particles that hit the Earth,” I interjected, wanting so badly to please.

“Very good,” he said. “We’ll just have a quick review, then.” He proceeded to bombard me with information as I wrote furiously in the large brown notebook he had given me: ultra-high-energy protons and iron nuclei, extensive air shower arrays, Cherenkov radiation, pions with neutral charges decaying to photons, isotropic scattering, atmospheric fluorescence detectors, photomultiplier tubes, photoelectric effect, GZK cutoff, the 1991 Oh-My-God particle (Oh-My-Gosh particle, I autocorrected in my head).

I struggled to keep up, but I was filled with awe. These were the mysteries of the universe, unfolding before me! I was at the forefront of astrophysics research! The manager gave me a place in the Undergraduate Slum, a largish cubicle with a scattering of computers, programming books, half-empty coffee cups, and half-groggy interns. My task? Create a set of computer programs that would convert one geodetic, or Earth-based, coordinate system to another. The end goal of Geolib, as we called the program, was to help full-time cosmic ray researchers more easily use our data to determine where ultra-high-energy cosmic rays came from. We—I liked saying “we”—had theories that they came from supernovae, magnetic variable stars, quasars, or active galactic nuclei, the powerful radiation surrounding the supermassive black holes at the center of galaxies. Here was my big chance to connect heaven and Earth through the scientific templum.

Stan, my direct supervisor, took me outside later that day with a surveying unit, a plumb bob, and a GPS device. In geodesy and cartography, he told me, a fixed reference point is called a datum. I squinted at him in the bright sun, trying to squint knowingly. Azimuth and elevation; an east, north, up vector system; GPS coordinates; an XYZ coordinate system with an origin placed anywhere you wanted, augur-style—these were all geodetic datums I had to connect mathematically in my conversion program.

“Which coordinate system do we most need for the cosmic ray data?” I asked him, pretending to know what I was talking about. Stan reached up to readjust his giant tinted glasses.

“Depends on what you want to measure.”

Creating Geolib was not easy, but I did it. In my brown notebook, I drew many oblate ellipsoids skewered by various sets of axis lines without fully understanding what I was seeing. I actually used the trigonometry and pre-calculus I had learned in school. I tried to imagine what the Earth would look like as a geoid—a more accurate model of our bumpy, uneven planet—so we could measure surface elevations more precisely. I fell asleep on my keyboard trying to learn how to create an array of pointers in the C programming language. I ate an obscene number of Nutty Buddy bars. I asked the other undergraduates for help with partial differential equations and was frustrated by my inability to understand the math.
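For the curious: one of the conversions a library like Geolib needs, from geodetic latitude, longitude, and height to Earth-centered XYZ, is textbook geodesy on the WGS84 ellipsoid. A minimal sketch in C (illustrative only; the formula and constants are standard, but the function name and interface are my own invention, not Geolib’s actual code) looks like this:

  /* A sketch of one conversion a program like Geolib would need:
     geodetic latitude/longitude/height to Earth-centered XYZ (ECEF)
     on the WGS84 reference ellipsoid. Illustrative, not Geolib itself. */
  #include <math.h>
  #include <stdio.h>

  #define WGS84_A  6378137.0             /* semi-major axis, meters */
  #define WGS84_F  (1.0 / 298.257223563) /* flattening              */
  #define DEG2RAD  (3.14159265358979323846 / 180.0)

  static void geodetic_to_ecef(double lat_deg, double lon_deg, double h,
                               double *x, double *y, double *z)
  {
      double lat = lat_deg * DEG2RAD;
      double lon = lon_deg * DEG2RAD;
      double e2  = WGS84_F * (2.0 - WGS84_F);  /* eccentricity squared */
      /* prime-vertical radius of curvature at this latitude */
      double n   = WGS84_A / sqrt(1.0 - e2 * sin(lat) * sin(lat));

      *x = (n + h) * cos(lat) * cos(lon);
      *y = (n + h) * cos(lat) * sin(lon);
      *z = (n * (1.0 - e2) + h) * sin(lat);
  }

  int main(void)
  {
      double x, y, z;
      /* roughly Salt Lake City: 40.76 N, 111.89 W, 1,300 m elevation */
      geodetic_to_ecef(40.76, -111.89, 1300.0, &x, &y, &z);
      printf("ECEF (m): %.0f %.0f %.0f\n", x, y, z);
      return 0;
  }

Which frame you convert into was exactly Stan’s “depends on what you want to measure”: Earth-centered coordinates relate detectors across the whole array, while a local east-north-up system at a single detector is better suited to azimuth and elevation, and the program’s job was to move among them.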

Whenever I’d banged my head against the mathematical wall for more than a few hours, I’d take my calculations to Stan’s cubicle. His desk was overflowing, mad scientist-like, with papers, folders, mugs with various levels of dark liquid, multiple computers, and assorted gizmos and gadgets, including a high-tech laser photometer. Stan was a conundrum: he’d never gotten a college degree, but he had worked for decades in astrophysics research for a reputable university; he was an atheist, but he loved living in Utah. Sometimes we’d get sidetracked from our Geolib diagrams by intense dialogues about religion. I’d rib him about drinking coffee—forbidden to Mormons—and he’d retort that I was supposed to eat meat only in times of winter or famine, or didn’t I know my own Word of Wisdom scriptures? It turned out that Stan was technically one of those ex-Mormons I had been taught to fear, but he was not like any kind of anti-Christ Korihor I had pictured: Stan had refused to attend church at the ripe old age of eight, when he felt pressured to proclaim in front of the entire congregation that he knew the Church was true. He didn’t know, he said. He could believe, he could even want to believe, but he couldn’t know.

“But there are many ways of knowing something’s true,” I countered. I talked about how God sends powerful experiences and feelings to those who ask in faith. This is great missionary experience, I inwardly crowed, spiritually patting myself on the back.

“I thought you weren’t supposed to seek for signs,” Stan responded. “I thought you were supposed to live by faith.”

“Well, the scriptures tell us to search for truth, and God’s willing to open the door if we knock. But the more we know, the more we’re responsible for, so it’s really an act of mercy if he withholds something we’re not ready for. Milk before meat, and all that.”

“Whatever you say!” Stan replied cheerfully, lifting his ever-present coffee mug to his lips. “I’m vegan, so I don’t want milk or meat. I’ll stick with coffee, thanks.”

“You’re so frustrating, Stan!”

He just grinned. “I think you mean Sa-tan. Now, get back to work. You’re going to kick ass in college if you keep working this hard.”

A more pleasant apostate you will never, ever meet.

On my eighteenth birthday, one of my little sisters came clattering down the stairs to tell me that Paul, a boy from my physics class, was at the front door. I had begun to consider that black holes and boys were not mutually exclusive topics of interest after all, and I had developed a crush on him. Paul delivered two gifts: a burned CD of the NeverEnding Story soundtrack (Mormons love their cult classics) and a book by Richard Ingebretsen titled Joseph Smith and Modern Astronomy. I still have the book: the pages fall out no matter how lightly I try to turn them.

Ingebretsen was part of a cadre of Mormon science lovers who wrote books describing their grand unified theories of science and religion. These books were never official publications of the Church, but they still pervaded our discourse and occupied hallowed spaces on our bookshelves. “With his mind,” Ingebretsen decrees on page one, “Albert Einstein reasoned what Abraham had been told by God thousands of years before. It took science over 3,500 years and the superb intellect of Einstein to re-discover what Abraham knew.” I gobbled it up.

A few weeks later Paul kissed me, and a few months after that he broke up with me so he could focus on preparing for his mission, as good Mormon boys were supposed to do. We remained friends, but the incident made me feel as if I were a wicked distraction from his more important priesthood responsibilities. Black holes were safer and less mysterious than boys, I decided, and I threw myself at my college textbooks.

At the age of twenty-one, just before setting off on my own mission, I finally attended the Endowment session in the Jordan River Temple. More impatient than nervous, I entered the Endowment room, which looked like a small theater containing enough self-folding seats for at least forty people. Women were directed to sit to the left of the central aisle, men to the right. I sat in the front row on a chair cushion the color of desert sage, which matched the floor-to-ceiling curtain at the front of the room. My mother, settling in beside me, was dressed as I was, in a long-sleeved white dress and white slippers. I wiggled my toes in the slippers; they made me feel like I had satin clouds attached to my feet.

A portly man dressed in a white suit stood calmly but unsmilingly at a simple altar in front of the enormous curtain. When everyone was settled, he pressed a few buttons to start the audio recording of the presentation. After a deep masculine voice announced the importance of the ceremony, the lights dimmed and a large screen at the front of the room descended. The video presentation of the creation story from Genesis was so beautiful I wept. Later, I would experience the same awe as I watched the new Cosmos series with Neil deGrasse Tyson and BBC’s Human Planet and Planet Earth II documentaries. They all shared sweeping landscapes, close-ups of flowers and animals, and music that created visceral physical responses down my spine and across my skin: a divine feeling, whether sent by a divinity or not. The Earth we have, lumpy and asymmetrical though it may be, is ours, the pale blue dot over which we can be better stewards.

A sense of overwhelming reverence is something both science and religion can provide. Both proffer to their acolytes the notion of the sublime, as preached by the Romantic poets. An eighteenth-century German philosopher and gardening enthusiast, Christian Hirschfeld, defined the sublime as seeing our own potential in the grandeur of nature and its many landscapes, which are outward symbols of our many inward human realities. The poet William Wordsworth considered the sublime to be the mind trying to “grasp at something towards which it can make approaches but which it is incapable of attaining,” a mood where mystery’s burdens and “the heavy and the weary weight / Of all this unintelligible world, / Is lightened.” This is the mood I have felt in singing praises to God, scanning poetry, snuggling with pets and people, studying planets. In his book Truth and Beauty, Subrahmanyan Chandrasekhar, an Indian American astrophysicist who won the 1983 Nobel Prize for Physics for his work on the structure and evolution of stars, also wrote of the human need to search for the sublime:

This “shuddering before the beautiful,” this incredible fact that a discovery motivated by a search after the beautiful in mathematics should find its exact replica in Nature, persuades me to say that beauty is that to which the human mind responds at its deepest and most profound.

The subsequent parts of the Endowment ceremony were less awe-inspiring for me. Painful childbirth and patriarchy (Genesis 3:16) seemed a heavy price for Eve’s sin of eating a piece of fruit in search of knowledge. Hand in hand, Adam and Eve were expelled from the Garden of Eden into a lone and dreary world with the promise that if they were obedient, they could return to God’s presence.

The white screen ascended back into its slot in the ceiling, and we were asked to put on special temple attire over our clothes, each item signifying spiritual progress toward God in some way. It was strange, but I clung to what my grandma had said the day she purchased my temple clothes for me: whenever I donned the symbolic temple clothing, she said, she wanted me to feel wrapped in God’s love and her love. When prompted by the masculine voice, I bowed my head and covenanted to be faithful to my church and its teachings. If I broke those covenants, I risked losing my place with my family in the afterlife. We were then allowed to pass by a curtain into the Celestial Room, which contained a glorious three-tiered chandelier and stately chairs and couches fresh out of a high-end furniture magazine. Copies of the scriptures, tissue boxes, and impressive flower arrangements stood on ornate end tables. We were encouraged to reflect on the ceremony, to commune with God in private prayer, and to whisper if we needed to speak to others. I felt relieved we could sit by the male members of our families again.

Slightly disappointed by the ceremony but still wanting to share the sublimity I had felt, I set off on my mission. As a Spanish-speaking missionary in Toronto, I often talked to passengers on subways and buses: a captive audience. It was my first step into the wider world, and how wide it was! On just one bus ride I’d talk to immigrants from China, Peru, Ghana, Ukraine, Mexico, and Afghanistan. Our mission president asked us to visit Spanish-speaking church members who had fallen away and invite them back to the fold, and in our missionary lessons with them in their homes, I often used an analogy: if a train is heading to the place you want to go, and a fellow passenger steps on your toe, are you going to get off in a huff and deny yourself your destination? If the Mormon Church is the train, heading toward eternal happiness, why would you ever disembark?

I had many faith-affirming experiences, but some moments were terribly destabilizing, the kind of feeling you get when your subway car breaks down in the tunnel and the lights flicker on and off. Late one afternoon, my mission companion and I were out knocking doors through a neighborhood of run-down townhouses. I had fasted all day in the summer heat to be worthy enough to find someone who would listen to us, and I was weak from hunger and thirst. We noticed a man in a black turban walking by; we gave him a card for our free English class but did not try to engage him in conversation. We had just started talking to some teenagers in a driveway when a woman came barreling out of the house, screaming that she was a proper Christian and ordering us off her property.

We apologized and immediately crossed to the other side of the street. Shaken, in tears, I was trying to compose myself when the man in the turban came back and said, in excellent English, that he had seen what happened. He kindly invited us to dinner with his family. His smiling wife greeted us at the door and introduced us to their young son. The small apartment held little in the way of fancy furniture, but it was clean. The family had emigrated from the Middle East, and together, at a low table, we ate basmati rice, vegetables, and fruit. They were not interested in our religion, but their kindness demonstrated a principle that religion teaches better than science: to show goodness and mercy where none is required. The son shyly showed us his detailed Basmalah calligraphy, which formed an image of a child praying. As we thanked them for their generosity at the door, the boy gave me the drawing.

The incident troubled me: of course I knew there was goodness elsewhere in the world, outside Utah, outside Mormonism, but here was a family who didn’t need what I was offering, a family—and the thought felt blasphemous—who didn’t need saving. Throughout all the years that followed—returning from my mission, kneeling at the altar with my husband, Andrew, in the Jordan River Temple, graduating in English instead of Physics, editing science books and articles, giving birth to my son (all the while cursing Eve’s curse), moving from country to country for Andrew’s work—I kept the boy’s picture.

In thinking of all the people I have met, I find it difficult to lock down any philosophical axiom concerning science and religion. I can do so only from my very particular—some may consider it singular—point of view and set of circumstances. The more stories we hear, however, the more I believe we will begin to see guiding constellations in the metaphysical sky.

I have recently been fascinated by Isaac Newton and his particular circumstances. Abandoned by his mother at the behest of his new stepfather, Newton spent hours alone on his grandmother’s farm creating makeshift sundials. In his solitude, as James Gleick said, Newton made knowledge “a thing of substance: quantitative and exact.” Newton’s epitaph, written by Alexander Pope, is most fitting:

Nature and Nature’s Laws lay hid in Night:

God said, “Let Newton be!” and All was Light.

When he was nineteen, Newton meticulously catalogued his sins, one of which was “Wishing death and hoping it to some.” Despite his sins, he believed he had been chosen by God to interpret the Bible, so he spent more time trying to find hidden meaning in the scriptures than trying to decipher the physical universe. One of my science writing students this past year argued that Newton would have accomplished much more had he not been so isolated in his religious pride. To play devil’s—or maybe heaven’s?—advocate, I countered with the idea that maybe Newton’s religious beliefs had actually given him the drive and focus to discover the laws of nature. We can only conjecture.

We may even find that there need be no quarrel at all between some aspects previously regarded as sore points between science and religion. As Alan Guth, an American theoretical cosmologist, said, “The big bang theory is not really a theory of a bang at all. It is really only a theory of the aftermath of a bang …. But the standard big bang theory says nothing about what banged, why it banged, or what happened before it banged.” Georges Lemaître, who first proposed the idea of the Big Bang, was not only an astronomer and a professor of physics but also a Catholic priest, and in the past few years, Pope Francis has openly supported the Big Bang theory and evolution, as well as the need to combat climate change. Our primary war is not against science or religion; it is against the forces of nature, including human nature, that diminish our capacity to feel the sublime in its many incarnations.

I had the chance to visit Hugh Nibley himself shortly before he died in February 2005. He was lying on a bed in his living room, propped up by pillows. Books lay all around him, on his bed and in stacks on the floor. My old friend Paul, who accompanied me, asked Nibley if the Mormon Church was true. Nibley’s answer, on his deathbed, was the same phrase Mormons use to describe their belief in the Christian Bible: “As far as it is translated correctly.” As I look back now, Nibley’s riddle-like answer seems laced with sadness, as if the birds of the heavens were not as reliable as he wanted them to be.

In 2013, my little family of three moved to Montreal for Andrew’s work, and we were quickly and lovingly integrated into a wonderful congregation. But two decades of studying Mormon doctrine and how it was practiced began to cause friction between my desire to be honest and my desire to be loyal. After investing so much in Mormonism, it was uncomfortable for me to realize how many members and leaders of the Church had, Newton-like, taken their personal translations of the scriptures and were preaching them as doctrine over the pulpit. I was also frustrated by the impotence I felt as a female leader in the Church. After practicing job interviews with the young women in my congregation, I was chastised for not focusing enough on teaching the girls to become dutiful wives and mothers. I was willing to stay in the church and fight this gender war, however, and I began meeting with my bishop and other male leaders to try to explain how benevolent sexism was still sexism, and still harmful. They listened patiently but told me they could change nothing.

That July, in a small town three hours east of our apartment, a train accident caused massive explosions, killing forty-seven people. The news unfurled images of giant plumes of black smoke, billowing mushroom balls of flame, and people shouting, “Mon Dieu! Mon Dieu!” as those behind the camera alternately ran toward the inferno for a better view and ran away in terror. We learned that the engineer had parked the train, seventy-four cars long and carrying millions of liters of petroleum crude oil, on an incline in Nantes, seven miles from Lac-Mégantic. Unfortunately, he did not set enough hand brakes on the cars, and the pull of gravity on the incline overcame the friction of the brakes. The unattended train picked up speed as it went, and finally derailed at a curve in Lac-Mégantic. About half of the buildings in the area were destroyed, and nearly all the remaining buildings had to be demolished because of petroleum contamination.

Only months later, my own spiritual engine set out on a crash course to the center of my soul. My concerns about gender inequality, a God who sanctioned polygamy but not homosexuality, and doctrinal inconsistencies in our scriptures became more important than my fear of spiritual and social consequences. The last hand brake broke when I read In Sacred Loneliness: The Plural Wives of Joseph Smith, by Mormon historian Todd Compton, which derailed my faith in Joseph Smith altogether and—although I neither expected nor desired this outcome—my faith in God.

In January 2014, I visited the Palmyra Temple in New York with my husband. The temple overlooks the Sacred Grove, where Joseph Smith said he saw God. Inside the Celestial Room, sitting on the white couches under a white chandelier, as light streamed in from the glass windows, Andrew and I decided to leave the faith. We left the temple hand in hand, ready to face the lone and dreary world together.

The sudden vertigo caused by this decision was almost Copernican in nature for me—that is, leaving Mormonism was like believing that Earth is the center of the universe, then suddenly discovering it is an uneven chunk of rock rotating, as physicist Richard Feynman said, like a spit in front of a great fire. Comforting certainty has been replaced with ambiguity and nuance. Some of my friends and family believe I’m lost in my intellectual pride, deceived by the devil, and destined to be punished for seeking out the fruit of forbidden knowledge. When I called my dear high school friend Brent on the phone, fearful he had also shut his heart against me, we talked earnestly for five hours, and I felt nothing but compassion from him; he then showed it by flying out to Montreal with his wife to visit us. Other family members have also loved us through the whole ordeal, as have friends who revealed they had left the faith long ago but hadn’t told anyone for fear of social retribution. I am fortunate to still feel wrapped in my grandmother’s love. It takes time to put out the fires, clean up the mess, and rebuild, but we are doing it.

In July 2015, I e-mailed Stan to say I was flying in to give a presentation at the University of Utah on my new anthology of essays by twelve Mormon—and formerly Mormon—women. I told Stan I’d love to see him while I was there, and he was one of the first in line at our book signing. After the presentation, I asked to see my old cubicle in the Undergraduate Slum, which hadn’t changed much in a decade. Over cups of coffee and meatless salads for lunch, Stan told me that Geolib was still being used by researchers to convert one set of coordinates to another, and that the programs had been very helpful to them through the years. Something settled in me when I heard this. I wasn’t a world-renowned scientist, but I had contributed. Now, teaching astronomy and science writing to students in Nicaragua, I find great meaning in sharing the current knowledge we have about the cosmos. The school roof, where we host our star parties, has become my new templum. As I align the crosshairs of our school telescope on Jupiter, or Saturn, or Venus, or other planets named after gods, I feel tethered to heaven in a new way. That optical “Engine” of the astronomers, as John Donne calls it, is my students’ conduit to the heavens, an axis mundi as meaningful and as centering as the pagan Callanish stones of Scotland, a Mount Meru mandala from China, the unit circle on a Cartesian plane, or a Christian cruciform halo.

The fact that Polaris will not always be our North Star seems deeply symbolic to me now. Because of axial precession—the slow wobble of Earth’s axis—Polaris will gradually be dethroned over the coming millennia, and Gamma Cephei, a star in the constellation Cepheus, will take its place as the North Star. The Big Dipper on the Salt Lake Temple will look strange and out of season. Constellations will change. The Milky Way and Andromeda galaxies will merge. The firmament, in both the physical and metaphysical sense, is not firm after all.

Emerson once said, “There are no fixtures in nature. The universe is fluid and volatile. Permanence is but a word of degrees.” I find a strange stability in the idea that nothing is stable or fixed—not the stars, not even the universe itself. As uncomfortable as uncertainty is, it begets a healthy humility and the need to acknowledge margins of error in all our calculations, in all areas of life. Uncertainty can inspire us to keep searching for answers.

I now draw my own urbs quadrata from which to measure and gauge the universe, but birds that have lost their prophetic gifts are nevertheless respected and appreciated. Although Mormonism is no longer my system of orientation, I still love my people, and I applaud and support their belief in the Jesus of lambs and butterflies. The world will be a better place for it. Despite our theological differences, we are aligned in purpose as we train our eyes on the heavens—to seek out the sublime, the things we both fear and adore, and to share our shuddering with a world in great need of both humility and inspiration.

Forum – Fall 2017

Are cops on the science beat?

In “The Science Police” (Issues, Summer 2017), Keith Kloor alleges that self-appointed sheriffs in the scientific community are censoring or preventing research showing that the risks from climate change are low or manageable. His complaint draws support from scientific articles that, he claims, suggested that “a main avenue of climate research (natural variability) should be ignored” and that discouraged climate scientists from investigating a recent phenomenon often identified as a “pause” or “slowdown” in the rate of global warming.

We authored those articles, and we stated the exact opposite. Contrary to Kloor’s fabricated claim, we encouraged research on natural climate variability, including the recent alleged slowdown in warming.

The idea that warming has paused or stopped originated with contrarian opinion articles in the media—rather than in the scientific literature—but it was picked up by researchers and assumed the status of a significant scientific phenomenon. To date, more than 225 articles have been published on the issue.

When the recent slowing in the warming rate is subjected to thorough statistical analysis, as in a number of articles—including ours—the conclusion is that the data do not justify the notion of a pause or hiatus in global warming. Warming clearly slowed at some point early in the twenty-first century, just as it accelerated at other times, such as during the past four to five years, but it never stopped or paused in a statistically meaningful sense. Thus we argued that the terms “pause” and “hiatus” were misleading.

That said, it would be impossible to draw any conclusions about a pause or its absence without research on the nature and causes of global temperature fluctuations. That is why one of our articles cited by Kloor contained an entire section titled “the merits of research on the pause.” This section noted that “The body of work on fluctuations in warming rate has clearly contributed to our understanding of decadal variations in climate.” We went on to specify some achievements of that research.

In another publication to which Kloor refers, we stated unambiguously that “Our conclusion does not imply that research aimed at addressing the causes underlying short-term fluctuations in the warming trend is invalid or unnecessary. On the contrary, it is a legitimate and fruitful area of research.”

We were, however, concerned about the way the slowdown in warming gained a foothold in the scientific literature under the label “pause” or “hiatus,” without much statistical support. We argued that this might have arisen as a consequence of the “seepage” of climate denial into the scientific community. That is, we argued that although scientists are trained in dealing with uncertainty, there are several psychological reasons why they might nonetheless be susceptible to contrarian argumentation, even when scientists are rebutting those arguments. For example, the constant mention of a “pause” in political and public discourse may cause scientists to adopt the term even if its meaning is either ill-defined or inappropriate, and even if the notion has little statistical support.

Far from discouraging scientists from pursuing any particular line of research, our work provided pointers to assist scientists in avoiding rhetorical traps set by politicians and political operatives in the future.

As we have shown elsewhere, contrarian discourse about climate science is incoherent and misleading, and suffused with rhetorical tools aimed at disparaging climate science and climate scientists. Kloor’s fanciful specter of a “science police” was partly based on claims about our work that were reversals of what we actually said. The science police concocted in this way is thus another rhetorical tool to discredit those who defend the boundary between science and pseudoscience or politics. The science police label facilitates the kind of seepage we recently observed relating to the “pause.” Why a respected journal such as Issues in Science and Technology chose to publish such misrepresentation without fact-checking is a topic for further discussion.

Stephan Lewandowsky

University of Bristol

Bristol, United Kingdom

James S. Risbey

CSIRO Oceans and Atmosphere

Hobart, Australia

Naomi Oreskes

Harvard University

Cambridge, Massachusetts

Keith Kloor raises important concerns but is unable to arrive at a clear conclusion about what, if anything, ought to be done about the problems he identifies. Though he presents a handful of alarming anecdotes, he cannot say whether these represent the exception or the rule, and it makes a difference whether scientific discourse mostly works, with a few glaring exceptions, or is pervasively broken.

Real life does not distinguish as clearly as Kloor attempts to do between scientific and ideological considerations. The article by Roger Pielke in FiveThirtyEight, which Kloor discusses at length, certainly attracted ideological responses, but it was also widely criticized on scientific grounds. Pielke and his critics, such as William Nordhaus, have published arguments for and against in peer-reviewed journals. For those who sincerely believe that someone’s methods are deeply flawed and his or her conclusions factually incorrect, declining to print that work is an act not of censorship but of responsible peer review or journalism. To do otherwise risks contributing to the phenomenon Maxwell Boykoff and Jules Boykoff call “Balance as Bias.”

However, drawing a bright line between responsible policing for accuracy and irresponsible policing for ideological purity is often impossible. In The Fifth Branch, Sheila Jasanoff distinguishes “research science” (narrowly disciplinary with strong consensus on methods) and “regulatory science” (intrinsically interdisciplinary, with experts holding diverse views about the soundness of methods and also strong political views). In regulatory science, such as research on climate change, what some see as purely scientific judgment that certain work is shoddy may seem politicized censorship to others.

In 1980, Alan Manne and Richard Richels found that expert engineers’ political views about nuclear energy strongly influenced their scientific judgments about apparently unrelated factual questions.

In Science, Truth, and Democracy, Philip Kitcher considers whether some scientific questions, such as hereditary differences in intelligence, ought not to be pursued because of the potential for even solid empirical results to be misused politically. Kitcher argues that certain research ought not to be done, if it is likely to cause more harm—through political abuse of its results—than good. However, he also recognizes that censorship would likely cause even more harm than the research. He concludes that the question whether to undertake a potentially politically dangerous line of research should rest with the conscience of the researcher and not with external censors, however well-intentioned.

It is important to keep outright falsehoods out of journalism and the scientific literature. Creationism and fear-mongering about vaccine safety do not deserve equal time with biological and medical science. But in matters of regulatory science, where there is not a clear consensus on methods and where it is impossible to strictly separate factual judgments from political ones, the literature on science in policy offers strong support for keeping discourse open and free, even though it may become heated. But it also calls on individual scientists to consider how the results of their research and their public statements about it are likely to be used.

Jonathan Gilligan

Associate Professor of Earth and Environmental Sciences

Associate Professor of Civil and Environmental Engineering

Department of Earth and Environmental Sciences

Vanderbilt University

Imagine you’re a postdoctoral researcher in a nonempirical discipline, and you draft a paper with conclusions contrary to the dominant normative beliefs of most of your field’s senior scholars. Despite your confidence in the article’s rigor, you may be apprehensive about submitting, lest it get mired in critical peer review or—even worse—you develop a reputation harmful to your career. The safe course would be to remain within the boundaries of the accepted range of discourse, not submitting the draft.

The evaluation—actual or merely feared—of scholarship based in part on its congruency with prevalent assumptions and politics has led to accusations that some nonempirical disciplines are vulnerable to groupthink and cycles of fashionable leading theories. This is troubling, both because of the increasing political homogeneity of these fields’ members and because work in these fields can improve our understanding of society.

Researchers in empirical fields may believe that their research is not so vulnerable. Of course, empirical research is generated and assessed by humans with prejudices and desires, both conscious and unconscious, and has never been fully immune to biases and recalcitrant dominant paradigms. Yet political debates can and sometimes do infringe on scientific processes in ways that are more systematic and widespread, and carry greater risks, than the occasional biases of individual researchers and reviewers.

Keith Kloor draws attention to such a phenomenon in the natural sciences. He describes how the vigilant monitoring of science for its political implications is strongest in “highly charged issues,” especially climate change. One way in which such policing of the climate change discourse is most evident is the expanding application of the “climate change denier” label. In recent years, the smear has been leveled at a growing cohort: those who are appropriately skeptical of some conclusions within climate science; those who emphasize the high expected costs of dramatically abating greenhouse gas emissions; and those who note that nuclear power is an essential, reliable, scalable, zero-carbon energy source.

If Kloor is right—and I believe he is—about this Manichean, with-us-or-against-us approach, then papers in line with peers’ political views will be gently reviewed, whereas those outside of them will be more rigorously scrutinized. The obvious consequence will be lower quality scientific output. In the case of climate change, this will likely also result in suboptimal decision making and policies.

The less obvious risks are political. Those who have been labeled deniers may find a more conducive audience for their conclusions among those who fully reject anthropogenic climate change, strengthening the latter constituency. Those who oppose policies to prevent climate change may take advantage of climate scientists’ internecine fractures. Those who approach climate change as novices may be discouraged by the dogmatism.

Given the stakes to humans and the environment, I believe that scientists are obligated to develop lines of inquiry, conduct research, submit articles, and conduct peer review as free as possible from ideological boundaries.

Jesse Reynolds

Postdoctoral Researcher

Utrecht Centre for Water, Oceans and Sustainability Law

Utrecht University

Publication blues

In “Publish and Perish” (Issues, Summer 2017), Richard Harris has performed a valuable service by exploring some of the problems currently afflicting science. He identifies academic pressures to publish in high-impact journals as an important driver of the so-called reproducibility crisis, a particular concern in the life sciences. We agree with this assessment and add that these issues reflect problems deep in the culture of science.

Today, a junior scientist who has published in a high-impact journal a paper whose conclusions are wrong (provided that the paper is not retracted) is more likely to have a promising career than one who has published a more rigorous study in a lower-impact specialty journal. The problem lies in the economy of contemporary science, whose rewards are out of sync with its norms. The norms include the 3Rs: rigor, reproducibility, and responsibility. However, the current reward system places greater value on publishing venue, impact, and flashy claims. The dissonance between norms and rewards creates vulnerabilities in the scientific literature.

In recent years we have documented that impact is not equivalent to scientific importance. As Harris observes, some of the highest impact journals have had to retract published papers as a result of research misconduct. When grant-review and academic-promotion committees pay more attention to the publication venue of a scientific finding than to the content and rigor of the research, they uphold the incentives for shoddy, sloppy, and fraudulent work. This flawed system is further encouraged and maintained by top laboratories that publish in high-impact journals and benefit from the existing reward system while creating a “tragedy of the commons” that forces all scientists to participate in an economy where few can succeed. The perverse incentives created by this process threaten the integrity of science. A culture change is required to align the traditional values of science with its reward system.

Nevertheless, the problems of science should be viewed in perspective. Although we agree that reforms are needed and have suggested many steps that can make science more rigorous and reproducible, we would emphasize that science still progresses even though some individual studies may be unsound. The overwhelming majority of scientists go to work each day determined to do their best. Science has improved our understanding of virtually every aspect of the natural world. Technology continues its inexorable advance. This is because, given sufficient resources, the scientific community can test preliminary discoveries and affirm or refute them, building upon the ones that turn out to be robust. The ultimate success of the scientific method is sometimes lost in the hand-wringing about poor reproducibility. Scientists test each new brick as it is received, throwing out the defective ones and building upon the solid ones. The ever-growing edifice of science is therefore sturdy and continually reinforced by countless confirmations. Although there is no question that science can be made to work better, let us not forget that science still works.

Arturo Casadevall

Chair, Department of Molecular Microbiology and Immunology

Johns Hopkins Bloomberg School of Public Health

Ferric C. Fang

Professor of Laboratory Medicine and Microbiology

University of Washington

The case that Richard Harris presents in his article and in his damaging book suffers from three significant problems.

First, the statistics about how many results are not ultimately supportable ask the wrong question. It’s like asking how many businesses fail versus the number that succeed—far more fail, of course. Does that mean people shouldn’t start new businesses? Does that mean that there must be better ways to start businesses? Do we expect to have a foolproof, completely replicable method of starting a business? Of course not. Science is a process riddled with failure; failure is not just a step along the way to eventual success but a critical part of the process. Science would come to a dead stop if we insisted on making it more efficient. Messy is what it is, and messy is what makes it successful. That’s because it’s about what we don’t know, remember.

Second, those results that turn out to be “wrong” are wrong only in the sense that they can’t be replicated. This is a superficial view of scientific results. Scientific data are deemed scientific because they are in principle replicable—that is, they do not require any special powers or secret sauces to work. Do they have to be replicated? Absolutely not. And most scientific results are not replicated: that would be a tremendous waste of time and resources. Many times the results become uninteresting before anyone gets around to replicating them. Or they are superseded by better results in the same area. Often they lead to a more interesting question and the older data are left behind. Often they are more or less correct, but now there are better ways of making measurements. Or the idea was absolutely right, just that the experiment was not the correct one (there is a famous example of this in the exoplanet field). Just counting up scientific results that turned out to be “wrong” is superficial and uninformative.

The third offense, and by far the worst, is the conflation of fraud with failure. For one thing, this is logically wrong: these actions belong to two different categories, one being intentional and criminal, and the other being unintentional and often the result of attempting something difficult. Conflating them leads to unwarranted mistrust of science. Fraud occurs in science at a very low rate and is punished when discovered as the criminal activity that it is, through imprisonment, fines, debarment, and the like. This has absolutely nothing to do with results that don’t hold up. They are not produced deceitfully, nor are they intended to confuse or misinform. Rather, they are intended to be interim reports and they welcome revision, following the normal process of science. Portraying scientists as no more trustworthy than the tobacco executives who lied to Congress encourages the purveyors of pseudoscience.

The crisis in science, if there is one, is the inadequacy of the support and resources the nation is willing to devote to training and research. All the other “perversities” Harris claims emanate from that one source. This can be fixed by the administrative people he lists at the end of his article—and unfortunately not by any scientist, leading or otherwise. So why is he casting scientists as the perpetrators of bad science?

Stuart Firestein

Former Chair, Department of Biological Sciences

Columbia University

Power of partnerships

In “It’s the Partnership, Stupid” (Issues, Summer 2017), Ben Shneiderman and James Hendler advocate for a new research model that applies evidence-based inquiry to practical problems with vigor equal to that previously reserved for pursuits of basic science. The Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute at the University of California, where I work, take this premise as an essential element of our mission. Research initiatives in sustainable infrastructures, health, robotics, and civic engagement, along with affiliated laboratories and a start-up accelerator, have given rise not only to successful commercialization of research but also to effective partnerships with industry, government agencies, and the nonprofit sector.

Iterative and incremental development, in which ideas are tested and refined through give-and-take with stakeholders throughout the process, results in better outcomes for end users and a greater impact for the inventor or team than working in isolation does. Whereas conventional attitudes might relegate interaction with partners to resolving tedious details of implementation, the proposed model can present real technological and intellectual challenges that advance the science as well as the solution. The work of CITRIS investigators offers many examples.

Permeable boundaries between industry and academia have long prevailed in science and engineering fields, often driven by motivated individuals—faculty members serving as consultants, or industrial fellows spending time on campus. Building on these important relationships, institutional leaders can create a more sustainable and productive model by fostering a welcoming environment for collaborations among organizations. Addressing complex problems involving multiple systems and stakeholders will require an interdisciplinary approach, beyond the scope of an individual researcher or single lab.

How can universities encourage such partnerships, or at least reduce some of the friction that currently impedes their adoption? Shneiderman and Hendler provide useful guidelines for partnerships in their “pillars of collaboration,” and they hail signs of culture change among universities in their policies for tenure and promotion and among funding agencies in their calls for proposals. Universities could go further in three ways: recognize faculty for evidence of work that results in products, policies, or processes beyond (or in addition to) academic publications; simplify the framework in research administration, often a confusing thicket of internal regulations, for working with off-campus organizations; and support and develop career paths for research facilitators, specialized project managers who straddle the worlds of academic research, industrial partnerships, and community engagement. As we face increasingly complex global challenges, these steps can maximize the positive impact of collaborative research through the power of partnerships.

Camille Crittenden

Deputy Director, CITRIS and the Banatao Institute

University of California

The notions of societal engagement presented by Ben Shneiderman and James Hendler resonate deeply with public research universities, especially those with land-grant heritage. Collaboration among academia, government, and industry is richly woven into our histories as we conduct research and cultivate a next-generation science, technology, engineering, and mathematics workforce to advance the national interest. Members of the Association of Public and Land-grant Universities are particularly invested in working with local, state, regional, national, and international organizations to address societal needs. We celebrate the more than 50 public universities that have completed a rigorous self-evaluation and benchmarking exercise to earn the association’s designation as an Innovation and Economic Prosperity University, demonstrating their commitment to economic engagement.

Though there are certainly deficiencies in the linear model of basic research to applied research to development, we caution that there is still a vital role for inquiry-based fundamental research to build a foundational knowledge base. Surely, Shneiderman and Hendler would agree there is a need for a good portion of research to follow a theory-driven approach without knowing in advance the potential practical impact. Such research is by no means in conflict with service to society; in fact, many of the most pioneering innovations can trace their roots to fundamental research that was unconstrained by short-term commercialization aims.

Still, the authors offer an important reminder that universities must redouble their societal engagement through research that addresses the challenges of our time. We agree on the need to accelerate the advance of fundamental knowledge and its application to solve real-world problems. Thus, we are delighted to be early participants in the Highly Integrative Basic and Responsive (HIBAR) Research Alliance and intend to promote this contemporary concept of broadening participation among stakeholders to produce more accessible research. HIBAR builds on the strong foundation of earlier work by the National Academies, the American Academy of Arts and Sciences, and others, and calls for the adoption of transdisciplinary, convergence, and Grand Challenge research approaches. This collective effort crucially aims to promote partnerships and to advance academic research with increased societal relevance.

Our association is pleased to work alongside partner organizations to further develop HIBAR. This reaffirms our commitment to confronting societal challenges by conducting research focused on real-world problem solving and engaging a diverse range of stakeholders. We believe this emerging and evolving effort will prove key to addressing the most vexing issues facing society and will have a lasting impact.

Sarah Rovito and Howard Gobstein

Association of Public and Land-grant Universities

Ben Shneiderman and James Hendler provide an excellent description of the power of researcher/practitioner partnerships. They describe how partners should agree up front on goals, budgets, schedules, and decision making, as well as on how to share data, intellectual property, and credit.

They also describe a big problem: academic culture often discourages real-world partnerships. They trace this back to 1945, when presidential adviser Vannevar Bush argued that universities best serve the needs of society by disconnecting from those needs. Bush recommended a one-way sequential process whereby ideas begin within purely curiosity-driven research and gradually acquire usefulness while passing through private-sector laboratories to emerge as better drugs, smarter phones, and so on.

But this model agrees poorly with science history, so Shneiderman and Hendler question the academic culture that arose from it. I share their view, with an added nuance: Bush was not all wrong. His isolation doctrine does protect some important academic freedoms. However, it weakens others. Consider researchers who have the ability and desire to help solve key societal problems. An isolationist culture restricts their academic freedom to do so. In effect, it says, “If your research is useful, you do not really belong in a university.” This problem hurts us all. Can it be fixed?

Like Shneiderman and Hendler, I am optimistic, although I fear they may have underestimated the tenacity of current academic culture. True, there are encouraging signs, but most previous culture improvement efforts have failed, despite promising indicators. We need more than promise: we need a plausible, evidence-based plan for achieving the needed changes.

Fortunately, in recent years well-established principles have been developed for improving culture. Change efforts should be collaborative, first developing shared goals that are clearly defined and measurable. They must then surpass three critical thresholds that are often greatly underestimated: enough skillful effort must be applied; it must be sustained for a long enough time; and once the new normal is achieved, enough people must prefer it.

With this in mind, the Association of Public and Land-grant Universities is building on its legacy of public service and addressing challenges by hosting discussions of academic and societal leaders on this topic. By consensus, they clearly defined and named this research mode “Highly Integrative Basic and Responsive” (HIBAR) research, and various partners in the discussions have now formed the HIBAR Research Alliance to further progress. (The previous letter provides additional information about the alliance.) Research partners in the program combine excellence in both basic research and societal problem solving, through four essential intersections. Together, they seek new academic knowledge and solutions to important problems; link academic research methods with practical creative thinking; include academic experts and nonacademic leaders; and help society faster than basic studies yet beyond business time frames.

Alliance members are working to develop promising change strategies. These will target processes for research training, faculty grant allocation, and career advancement. Today, too often individual achievements in one discipline are valued over positive societal impact, creativity, teamwork, and diversity. We can and must improve this. As we succeed, everyone will benefit.

Lorne Whitehead

Professor, Department of Physics and Astronomy

Special Advisor on Innovation, Entrepreneurship and Research

University of British Columbia

When good intentions backfire

Policy analysts and commentators are fond of pointing out when good intentions can backfire—often for good reason. “The Energy Rebound Battle,” by Ted Nordhaus (Issues, Summer 2017), offers a case in point. While much of the world grapples with finding ways to reduce emissions from the burning of fossil fuels, policies that seek to promote energy efficiency play a central role. But the rebound effect signals a warning. Goods and services become cheaper when they require less energy, and this can stimulate greater demand, supply, or both. As a result, the energy and emissions savings may be less than expected, or may even increase under extreme circumstances.
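
To make the mechanism concrete with purely illustrative numbers: if a car becomes 25% more fuel-efficient, each mile gets cheaper, and people tend to drive more. Should driving rise by 10%, fuel use falls by only about 17.5% (0.75 × 1.10 ≈ 0.825) rather than the expected 25%; and were driving to rise by more than a third, fuel use would actually increase. That behavioral response is the rebound.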

That the rebound effect exists is not controversial, but there is a wide range of estimates of its magnitude. One take on the literature is that the estimates are smaller when measured more carefully, and larger when more speculative. But it is precisely in the more speculative, macroeconomic settings that the potential consequences of the rebound effect are likely to be most important. Whether people leave LED lights on longer than incandescents may be less consequential than how energy efficiency shapes the overall means of production in an economy. The English economist William Stanley Jevons raised these questions back in 1865. Today, research is still needed to get empirical traction on his seemingly paradoxical result.

But there is arguably a more immediate challenge to our understanding of energy efficiency policy. A growing number of studies find wide gaps between the predicted energy savings that come from engineering models and the realized energy savings that arise after adopting new technologies. The rebound effect can explain some of the difference, but the magnitudes are large enough to raise important questions about whether many current efficiency forecasts are overly optimistic. Gaining a better understanding about the potential for such systematic bias is of first-order importance, as it takes place upstream of any subsequent rebound effects.

Finally, it is worth noting the underlying reason why the rebound effect is a potential concern when the objective is to reduce emissions. The fact is that promoting energy efficiency is an indirect way to reduce emissions. Policies that instead seek to limit emissions directly or put a price on them (for example, a carbon tax) are not susceptible to rebound effects. In these cases, cost-effective compliance creates an incentive for greater energy efficiency without perverse secondary effects. In reality, politics may explain the focus on energy efficiency as a matter of expediency, but the long-term goal should be to promote a more direct linkage between our policies and objectives.

Matthew J. Kotchen

Professor of Economics

Yale University

Returning from the brink

As Sheila Jasanoff suggests in “Back from the Brink: Truth and Trust in the Public Sphere” (Issues, Summer 2017), we are unlikely to successfully resolve the current crisis in the politics of truth through simple appeals to trust in the authority of science. Not only does the historical record she cites show that controversies about policy-relevant science are a recurrent feature of politics in the United States, but there is also no reason to expect such debates to ever disappear.

Virtually every area of policy making today involves technical expertise, and if one includes the social and behavioral sciences, it is difficult to think of exceptions. Moreover, science controversies rarely concern the most solid and well established “core” of scientific knowledge. Instead, these disputes typically take place near the frontiers of research, where new knowledge and emerging technologies remain under construction and evidence is often incomplete and provisional.

When uncertain science meets controversial policy choices and conflicting values, a simple distinction between facts and values tends to break down. Indeed, setting the evidentiary threshold needed to justify treating a scientific claim as a policy-relevant fact becomes a value-laden decision.

In such a context, appeal to the authority of contested “facts” is a weak form of argument, easily dismissed as grounded in bias. Reaffirming our commitment to democratic values, inclusiveness, principles of open decision making, and basic norms of civil public debate offers a more promising strategy for advancing the goal of producing Jasanoff’s “serviceable truths.” This is especially true if this commitment is coupled to a concerted effort to hold accountable those who violate those norms or enable others to do so.

Stephen Hilgartner

Professor of Science & Technology Studies

Cornell University

Eyes on AI

In “Should Artificial Intelligence Be Regulated?” (Issues, Summer 2017), Amitai Etzioni and Oren Etzioni focus on three issues in the public eye: existential risks, lethal autonomous weapons, and the decimation of jobs. But their discussion creates the false impression that artificial intelligence (AI) will require very little regulation or governance. When one considers that AI will alter nearly every facet of contemporary life, the ethical and legal challenges it poses are myriad. The authors are correct that the futuristic fear of existential risks does not justify overall regulation of development. This, however, does not obviate the need for monitoring scientific discovery and determining which innovations should be deployed. There are broad issues as to which present-day and future AI systems can be deployed safely, whether the decisions they make are transparent, and whether their impact can be effectively controlled. Current learning systems are black boxes, whose output can be biased, whose reasoning cannot be explained, and whose impact cannot always be controlled.

Though supporting a “pause” on the development of lethal autonomous weapons, the authors sound out of touch with the ongoing debate. They fail to mention international humanitarian law. Furthermore, their examples for “human-in-the-loop” and “human-on-the-loop” systems—Israel’s Iron Dome and South Korea’s sentries posted near the demilitarized zone bordering North Korea—are existing systems that have a defensive posture. Proposals to ban lethal autonomous weapons do not focus on defensive systems. However, by using these examples the authors create the illusion that the debate is primarily about banning fully autonomous weapons. The central debate is about what kind of “meaningful human control” should be required to delegate the killing of humans to machines, even machines “in” or “on” the loop of human decision making. To make matters worse, they suggest that a ban would interfere with the use of machines for “clearing mines and IEDs, dragging wounded soldiers out of the line of fire and civilians from burning buildings.” No one has argued against such activities. The paramount issue is whether lethal autonomous weapons might violate international humanitarian law, initiate new conflicts, or escalate existing hostilities.

The authors are strong on the anticipated decimation of many forms of work by AI. But to date, political leaders have not argued that this requires regulation or relinquishing research in AI. Technological unemployment is not an issue of AI governance. It is a political and economic challenge. How should we organize our political economy in light of widespread automation and rapid job loss?

From cybersecurity to algorithmic bias, from transparency to controllability, and from the protection of data rights and human autonomy to privacy, advances in AI will require governance in the form of standards, testing and verification, oversight and regulation, and investment in research to ensure safety. Existing governmental approaches, dependent on laws, regulations, and regulatory authorities, are sadly inadequate for the task. Governance will increasingly rely on industry standards and oversight and on engineering means to mitigate risks and dangers. An enforcement regime to ensure that industry acts responsibly and that critical standards are followed will also be required.

Wendell Wallach

Scholar

The Hastings Center

Yale Interdisciplinary Center for Bioethics

Designing chestnuts

In “Philosopher’s Corner: Genome Fidelity and the American Chestnut” (Issues, Summer 2017), Evelyn Brister presents a well-written and balanced account of the state of affairs regarding efforts to work around the blight plaguing these trees, and as someone who is in the middle of things, I have nothing to debate. But I would like to clarify and expand on some points.

Her claim that “Restoring the American chestnut through genetic engineering adds about a dozen foreign genes to the 38,000 or so in its genome” needs some clarification. It is true that we have tested dozens of genes, individually and in combinations of two or three, but the first trees we will use in the American Chestnut Research and Restoration Project will have only two added genes. That is a small point; the more important one is that the genetically modified American chestnut we will use first will retain all of its original genes. Therefore, it should be as fully adapted to its environment as the original, with only blight resistance added. Unlike in hybrid breeding, where you may introduce genes for unwanted traits, such as short stature or reduced cold hardiness, genetic engineering keeps all of the original genes intact and adds only a couple of genes.

In our work in the restoration project, we used a gene from wheat that encodes the enzyme oxalate oxidase (OxO) to confer blight resistance in the American chestnut. This enzyme detoxifies the oxalic acid that the troublesome fungus uses to attack the tree. So it basically disarms the pathogen without harming it. But this OxO gene isn’t unique to wheat. Oxalate oxidase enzymes are found in all grains tested to date, as well as in many other plants, such as bananas and strawberries. In fact, the chestnut itself has a gene that is 79% similar to a peanut oxalate oxidase. So, the “genome integrity” that Brister discusses is not a simple concept, and defining it simply by the source of a few added genes is meaningless. It is better defined by how large a phenotypic, or functional, change is being made and how this affects the organism’s place in the environment. With the American chestnut, the change is very small and allows the tree to return to its natural niche in the forest.

Genetic engineering isn’t the answer to all pest and pathogen problems, but in some cases it is the best solution. It is only one tool, but it is a useful tool that shouldn’t be left sitting idle in the toolbox.

William A. Powell

Professor and Director, Council on Biotechnology in Forestry

Director, American Chestnut Research and Restoration Project

Scientist-in-Residence, SUNY College of Environmental Science and Forestry

Should a genetically modified, blight-resistant American chestnut be reintroduced to eastern North American forests? Evelyn Brister contends that this question cannot be easily answered by an objective, all-knowing science, but is instead rooted in philosophical concerns about genetic purity and naturalness. Her discussion of genome fidelity and comparison of breeding and genetic modification offer valuable nuance to the public discourse on the American chestnut and genetically modified organisms (GMOs) more generally. But in her focus on philosophies of this tree’s genome, Brister seems to downplay concerns about harm to health and environment and social and economic impacts, noting that GM chestnuts are more likely to cause ecological good than harm, and that “the economic imperialism that has followed corporate control of GMO intellectual property” is a “nonissue” because researchers have pledged to make the GM tree publicly available.

In my own research on chestnut restoration, I have found that there are crucial political, economic, and ecological concerns that drive opposition and hesitation to GM chestnuts, and these concerns extend beyond issues of genome fidelity. Some observers worry, for example, that the blight resistance of a GM chestnut may not be sustained over the long term if the blight fungus adapts or if added genes are silenced, rendering chestnut restoration a costly and wasteful undertaking. Others hesitate to champion a project that has received financial and material support from the biotechnology industry, including ArborGen and Monsanto, fearing that the chestnut is being used as a ploy to sell the US public on the value and necessity of GM trees. Relatedly, there is concern that rapid regulatory approval of a GM chestnut will set a precedent for how commercial GM trees are viewed and regulated in the future.

Still other people are primarily concerned with inadvertent ecological effects: How will a genetically novel tree affect existing forest dynamics, food webs, and carbon cycling? How will it affect the spread of invasive pests, such as the gypsy moth, and health risks, such as Lyme disease? There is some initial evidence, for example, that gypsy moths may feed more heavily on a transgenic variety of chestnut, possibly leading to increases in gypsy moth populations, as Keith Post and Dylan Parry noted in an article in Environmental Entomology. Other research has suggested that chestnut restoration—whether through backcross breeding or GM techniques—may alter the geography of Lyme disease and potentially increase risk of transmission.

In short, opposition and hesitation to a GM chestnut are not merely rooted in philosophical concerns about genome fidelity, but are also centered on the broader political, economic, and ecological impacts that the tree may have in the world.

Brister notes in her conclusion that the debate about a GM chestnut “requires that we weigh metaphysical concerns about genetic purity with practical and ethical concerns about forest diversity,” which suggests that opposition is based primarily on metaphysical concerns whereas support is based on practical and ethical concerns. She deems it likely that “maintaining healthy forests will require not only the use of genetic technologies to modify tree species, but also to control the pests that are killing them,” and she further states that “we can’t afford to miss the value of our forests by getting lost in debates about the trees.”

This line of reasoning is tempting, but also silencing: it closes off debate and insinuates that questioning GM trees may be detrimental to the state of forests more broadly. I would encourage everyone to ask: Where does this idea—that genetic modification of tree species and pests is necessary to maintain forest health—come from, and what evidence is there for it? Perhaps more important, what other options and strategies are overlooked, foreclosed on, or disinvested in when we decide that healthy forests require molecular interventions?

Christine Biermann

Assistant Professor of Geography

University of Washington

Evelyn Brister describes two research programs that aim to restore the American chestnut to US forests by making it blight-resistant. One program has created a hybridized American chestnut by using traditional genetic backcrossing; the other has created a blight-resistant genetically modified (GM) American chestnut. In her article, Brister explores the likely objections to the GM option.

Brister focuses on loss of “natural integrity” as the main concern raised by the GM chestnut. But as she indicates, the idea of natural integrity is problematic. It is not obvious that a hybridized chestnut has more natural integrity than a GM chestnut. And why should natural integrity matter anyway, especially when forest diversity is at issue?

Brister is right to raise these questions, but there’s more at stake than she suggests. She identifies natural integrity with “genetic integrity” or “purity,” interpreted as something like “closeness to the original genome of the American chestnut.” Certainly, some people will be concerned, in both cases, that the genetic composition of the new chestnut trees lacks purity in the sense of genetic closeness to the ancestor chestnut. But worries about naturalness frequently also concern how something came about, not just what it is composed of.

The degree of worry about both types of chestnuts might be related to the degree of intentional human interference involved in producing them. In the case of the GM chestnut, this is especially likely to lie behind concerns about insertion of a wheat gene to enable resistance to blight. It’s not just that the wheat gene is less natural in the sense that it is normally located in a more genetically distant plant. It’s also that the wheat gene could not have gotten there without human agency. Likewise, hybridizing an American chestnut with a domesticated Chinese chestnut draws on the long heritage of human agency required for the creation of domesticated trees.

Opening up questions about human agency, though, introduces other broader concerns about “wildness.” Suppose either of these chestnut varieties is planted in “the wild.” Would the forests into which these trees are introduced remain “wild” after we have deliberately released them? And would they require further human interventions once they have been planted, essentially creating a managed woodland?

Unlike Brister, we think that the potential ethical conflict here is not just about genetic purity, but that much broader wildness values are at stake. These trees will have a genetic makeup determined by people, and they will be planted and managed at a time and place, and for a purpose, determined by people.

Of course, perhaps there is no realistic alternative to a human-originating forest. Or even if there were such an alternative, it may be that the value of forest diversity should indeed outweigh not only genetic purity but also other wildness values. But nonetheless, we should not underestimate the importance of protecting the remaining wildness in US forests.

Clare Palmer

Professor of Philosophy

Texas A&M University

Peter Sandøe

Professor of Bioethics

University of Copenhagen

Search History

Some people go running or meditate; they recite mantras or affirmations, carry pictures of the saints. My brother used to keep one of those mini Zen rock gardens in his room as a teenager, turning over his thoughts as he raked the sand back and forth. But for me, there’s nothing like pouring anxious feelings into an empty Google search bar, posing questions too big for any one person to answer.

Some questions I’ve asked Google, between the ages of 11 and 26, in roughly chronological order:

This searching is the only prayerful thing I do, though I admit that as a form of prayer, Google search is problematic. Christianity emphasizes that the purpose of prayer is not to find answers, assuage existential anxiety, or get things for ourselves. Instead, it’s a means of knowing God, something closer to surrender. Through knowing and accepting God, we can begin to know ourselves and be at peace.

But maybe Google isn’t Christian; maybe it’s Buddhist. Everyone in the internet-connected world is familiar with Google’s uncluttered homepage: a single, rectangular search field with two buttons underneath, fixed in the middle of a white screen. Google’s colorful logo—originally designed to evoke toy building blocks—appears above the bar. The word “search” appears only once on the page, in the left button below the search field, though you can find it again by clicking on the square “app grid” button at top right, a feature added in 2013. Google has been praised for the minimalism of its homepage since its inception.

The Google homepage has been called “Zen-like” more times than are worth counting, though the last time I entered the Google search query “google zen-like homepage” it returned 13.4 million results. When I Googled “what is Zen,” I got this list in my third result:

Bodhidharma, a fifth-century Buddhist monk, described Zen as a “direct pointing” to the mind and heart. He said it’s a practice of studying the mind and seeing into one’s nature. You sit, not expecting enlightenment to strike, but in concentration, waiting for things to be revealed to you over time.

This is how Google works: instead of giving you search results based on how many times your search query appears on a given website, it crawls the internet to determine how many times sites relevant to your search query are linked to by other relevant sites. Then it ranks the sites and lays them out in order. This algorithmic ranking and delivery of the most relevant results is called “search quality.” Udi Manber, a former vice president of engineering at Google, described the still-highly-guarded specifics of the algorithm as the company’s “crown jewels.” Indeed, one of the most famous parts of Google’s ranking algorithm is PageRank, the rating system developed by Google cofounders Larry Page and Sergey Brin when they were Stanford PhD students.
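
To make the link-analysis idea concrete, here is a toy power-iteration version of PageRank in Python. It is a sketch of the core recurrence from Page and Brin’s published work, not Google’s production system; the damping factor, iteration count, and three-page “web” are illustrative assumptions.

```python
# Toy PageRank: a page's score is shared out along its outbound links,
# plus a "teleport" term so the iteration converges. Illustrative only.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a page with no links spreads its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Both other pages link to "c", so "c" collects the most endorsement.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))
```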

I didn’t know any of this, despite having used Google since the company’s incorporation in 1998. I was 11 years old then. I can’t remember a time when searching for “university” on an early search engine such as AltaVista delivered the Oregon Center for Optics homepage as the first result, though apparently that’s what happened in the mid-1990s. And although the summer Olympics were held that year, searching for “Olympics” on AltaVista returned mostly spam.

Larry Page once described the perfect search engine as something that “understands exactly what you mean and gives you back exactly what you want.” His description sounds kind of touchy-feely—like the friend or partner who intuitively knows what to say to you when you’re upset. But in fact, what makes Google feel this way are two highly technical components.

The first is Google’s reliance on natural language, the term of art for searching based on human speech rather than computer commands. Squarely in the realm of artificial intelligence, natural language is why you can enter “hot dog” as a search query and Google can understand that you mean the open compound word and food item, rather than a perspiring poodle. Google is revolutionary in that its interpretive process is continually refined through tracking its user queries, which provide an enormous sample of how people speak naturally. Google historian Steven Levy writes, “Google came to see that instant [user] feedback as the basis of an artificial intelligence learning mechanism.” In other words, the more we search, the more Google learns to talk and think like us. It draws ever closer to always knowing exactly what we mean and what we’re looking for.

This progression was most visible with Google Instant, the 2010 search enhancement that predicted what your search query would be and displayed search results as you typed. Clicking “search” was made unnecessary. Google’s official press release from that time explained that the thinking behind Google Instant was that people read faster than they type. Whole seconds were saved by letting users scan instant search results so they could refine their queries on the fly, rather than having to retype them. But the actual feeling of using Google Instant was that the search engine was thinking faster than me; before I could even fully think of how to ask my question, it understood and was answering.

“There is a psychic element,” then-Google vice president Marissa Mayer told a press conference, “because we can predict what you are about to search on in real time.” After seven years, Google did away with Instant in July 2017, explaining that the feature was less fluid on mobile devices. Still, auto-complete results show up in a drop-down menu below the search bar, drawing on nearly two decades of logged language. Even without Instant, Google still phrases my queries better than I could (and in milliseconds). It’s me without the clumsiness of my communication.

The second technical component is the way Google returns search results with more information (“gives you back exactly what you want,” in Page’s words). In earlier search engines, results were returned based on how many times your query appeared on a webpage. The expansion of the internet presented a problem: with no way to determine relevance, maintaining search quality across millions of websites becomes difficult. But because Google search exploits the link structure of the internet, more websites simply mean improved search quality. New websites supply more links, giving Google more clues to determine a particular website’s relevance to your search query.

And the more the web expands, the more complex and dynamic the portrait of user behavior becomes. Google finds failures in its ranking algorithm and then works to correct them. The more we search, the more Google can know about us. Google is recursive, circular. Empty and full, all-encompassing. Google was designed to be “the ultimate learning machine.”

I submit that the internet is the greatest human achievement—the integrated whole of human knowledge. In the mere fact of its integration, it is superhuman, beyond any one of us and inaccessible in its entirety. We need a medium to reach across the digital ether and speak to it. This is where Google comes in.

Like an oracle, Google can access and interpret the world beyond, though it is still essentially of this one. To me, the internet feels infinite, inscrutable, but the experience of coming to Google is still individual, intensely personal. Rather than being a superhuman artificial intelligence, what if Google is simply more human than all of us—imbued with all our semantics and our behavior and our private inquiries and our thoughts? It has learned to know us exactly as we try to know ourselves. Disembodied, it is the questioning impulse itself. I love it and hate it for precisely this reason.

The word “search” originated in the twelfth century, from the French cerchier meaning “to search.” It’s one of those words whose etymology can be slightly frustrating, because its place in language is apparently so singular and essential, its history is just iterations of the word itself. But “search” has its root in the Latin circare, to “go about, wander, or traverse” and circus, ring or circle. There is something recursive about it: to search, one goes round and round.

Technically, I have search in my email. I have search on my desktop. I have search in my pocket on my smartphone. I carry it with me; I shouldn’t need to search for it at all. But here I am, searching.

My favorite bit by the comedian Louis C.K. is about how children are always asking questions. He starts by admitting that before he became a father he used to judge parents for their reluctance to answer their children’s questions. He recalls watching a parent shut a kid down in a McDonald’s, telling him to just keep quiet and eat his damn french fries. But why not answer questions, C.K. asks his audience sarcastically, and expose your children to many wonders of the world?

“You can’t answer a kid’s question!” C.K. explodes. “They don’t accept any answer! A kid never goes, ‘Oh, thanks, I get it.’ They just keep coming, more questions: why why why, until you don’t even know who the fuck you are anymore at the end of the conversation. It’s an insane deconstruction!”

A conversation that begins with his daughter asking why she can’t go outside because it’s raining spirals out of control into analyzing why we’re here and C.K.’s admission that we’re alone in the universe.

He says, “At the end it’s like:

Why?

Well, because some things are and some things are not.

Why?

Well, because things that are not can’t be!

Why?

Because then nothing wouldn’t be! You can’t have fucking nothing isn’t—everything is!!”

It’s possible that I never outgrew this phase of life. I’ve always loved asking questions, especially why questions. I was raised in a largely secular family: my father a Reform Jew—Reform being the most liberal branch of Judaism, one that embraces the idea of a personal god—and my mother a lapsed Christian. In the abstract, at least, both my parents saw the value of religion, of engaging with something higher than yourself to find meaning and purpose. But both were also humanists and education researchers. Being a researcher who lives and dies by the scientific method comes with a built-in agnosticism and a low tolerance for woo-woo explanations. I’ve heard my father say that if something can’t be measured, it could just as easily not exist. Research was my parents’ chosen vehicle for making sense of the world, and they devoted their careers to answering difficult questions using it: Which students do better in school and why? Why did this or that social program not work? How do we judge the value of an education?

But science (like my parents) also acknowledges there are no definitive magic answers for us humans. Knowledge is gained only through deep thought and hard work, arriving at the most plausible conclusion—and a plausible conclusion is as good as it gets. This is also a recurring religious lesson: aspire to godly knowledge at your own peril. Prometheus was sentenced to suffer for eternity; Adam and Eve were expelled from the garden, made mortal. Job only compounded his own misfortune by seeking divine answers. Scientific research is humble in its own way. It’s keenly aware of its human limits. My father’s email signature still reads: “In God we Trust, all others bring data.”

It’s possible this left me wanting. I believed in data of course, but they weren’t enough. When I was young, my parents got wise and gave me a tape recorder to talk into, to keep me entertained and eventually exhaust myself. Somewhere in their closets there are cassette tapes full of me rambling and counting, then asking what numbers are and how high they can go, then asking about infinity, and at some point, probably asking about God.

“You were also really into spreadsheets,” my dad tells me.

I guess no one was surprised when I decided to study philosophy in college. I thought: this is the place where all my questions will finally be answered. It’s an ancient discipline, after all. But searching for answers through philosophy turned out to have the same pitfalls as a six-year-old’s argument with Louis C.K. You could swap recordings of my class discussions on the metaphysics of Parmenides for C.K.’s whole routine, and you’d end up in the same place.

When—a little more than halfway through my degree—I first asked Google if I should be studying philosophy, the top results were pretty much what you’d expect. They remain unchanged today, in fact, and are all from university philosophy departments. You can tell they’ve tailored responses to all of their anticipated readers. For the college student with my pretensions: To study philosophy is to grapple with questions that have occupied humankind for millennia, in conversation with some of the greatest thinkers who have ever lived. For that student’s possibly nervous parents: the philosophy major helps students gain critical thinking skills. You’re majoring in thinking. For the career-minded: You will acquire analytical skills crucial for success in many different areas. And (as was oft-repeated in my own college philosophy department): Did you know philosophy majors consistently have the highest LSAT scores?

But when I clicked deeper through the results back then, I also turned up a bunch of online discussion boards full of other disillusioned philosophy students. We all echoed our departments’ boilerplate: I studied philosophy to find answers. But then, everyone agreed, all I found were more questions. The scary thing about studying philosophy, others had commented, was not that you don’t get your questions answered, but that you start to doubt the value of questions and answers at all. You start to see that knowledge, as you had conceived of it, is relative and mutable. Can it even be studied? We’d all begun our degrees searching for Truth, capital T, only to realize such a search is foolish and will get you laughed out of the room. Someone should tell you to get over it, eat your damn french fries.

When I first began writing about search, one of my roommates, Dave, was a computer scientist at Carnegie Mellon University. Carnegie Mellon is home to one of the top computer science programs in the world. Some of Google’s most important employees—pioneers of search and artificial intelligence—studied and taught there. Dave would tell you that one of his degrees is in computer science but that he wasn’t technically part of the School of Computer Science; he was in the Department of Engineering and Public Policy, which is within spitting distance of computer science. Two of my other roommates, biomedical and civil engineers, loved to goad Dave and say that computer scientists aren’t really engineers.

Dave told me then, “I would define search as selective information delivery. Well, retrieval and delivery.”

When Dave and I first discussed search, he said a lot of things I couldn’t make sense of or put into context until I understood and internalized how Google works. He said one of the keys to searching is being able to quantifiably determine, à la Louis C.K., if something is or is not there. He said Google has gotten better at encoding people’s feelings and thoughts than it used to be. It’s moved beyond just being the most effective at matching up ones and zeros, and toward the idea of searching for concepts.

I told Dave that Sergey Brin said he wanted Google to be as smart as its user. “But I don’t think there’s any argument to be made against the idea that it’s smarter than me.”

“I think there’s a difference between being book smart and being wise.”

“How do you mean?”

“Well, imagine you go to the biggest library in the world, and rather than use a card catalog, there’s a librarian with super speed. She can retrieve anything you want from the library almost instantly. Is she smarter than you? Or is she just really good at performing that one task, at doing what she was designed to do?”

Dave reminded me of one of the dimensions of my love for Google. That is: its ability to retrieve practical information in the exact instant that I need it. Things such as directions somewhere, where food is, which brand I should buy, what time an event starts. Sometimes my search history is not pretty. Sometimes it might make you wonder how I’ve survived thus far as an adult in civilized society (though I suspect I’m not alone in this).

Sometimes it looks like this (age 25):

Or this, from over spring break and at the South by Southwest Interactive Festival (age 24):

There are times I see Google as a loving parent, leading me through the world. It (apparently) tends to me when I’m drunk, shepherds me from place to place and brings me safely home, tells me everything I want to know about whales—all without judgment, resentment, or even hesitation. It can tolerate endless questions. There are times I think I would be dead without it.

Dave asked me how we distinguish between smart and wise (an artificial intelligence question). I parroted Socrates: wisdom begins with self-knowledge.

Dave: “Google has no self-knowledge.”

“Yet.”

“When you’re searching, you’re still searching all human content. Google isn’t generative. It can’t be.”

Despite Google’s seeming inertness, there’s still something appealing about the idea of having access to the whole of human content. It comforts me to know that it’s there, that I can call on it whenever I want. I can always pose the same tired questions—why we’re here or how the universe began—and know that whatever answer Google brings back is the best we’ve collectively come up with so far. Google alone can do this. It makes me hopeful. And it makes me feel less alone.

Me: “I know what you’re saying. Sometimes I ask Google questions I know it can’t answer. It just feels to me like such a benevolent God, trying to help and guide me.”

Dave: “I know, I anthropomorphize technology all the time.”

“You should ask it if you’re an engineer.”

“I actually asked it that the other day: ‘why am I not an engineer?’”

Nietzsche said the most tragically human impulse was to question. All philosophy, he posits—the endless slog toward truth—is a not-even-thinly-veiled attempt at religion, whose purpose is to provide some justification for our suffering. Truth as God. Truth seekers as pilgrims. And all of this vain searching denies life as it is—a constant flux and power struggle, nothing more. Searching for truth in that kind of world is a sad farce, which does nothing but diminish the spirit.

When Western philosophy inevitably fails me, I always circle back to Buddhism. Unlike in continental philosophy, where I had formal training, with Eastern religions I’m a dilettante—a stereotypical American who vaguely wants to master stress management. And even at that, I’m pretty terrible. Being mindful in rush-hour traffic and visualizing myself as a flower are fine, but at the end of the day, I fail to meet the best-known Buddhist prerequisite: to leave all attachment at the door. To come to Buddhism searching for anything—truth, wisdom, inner peace, enlightenment—is the surest way never to attain any of those things. It’s a religion of nothingness, of emptying out, a process that begins when you stop wanting to be religious.

Jiddu Krishnamurti, a philosopher whom the Dalai Lama called “one of the greatest thinkers of the age,” says that if you deny the traditional approach of seeking truth, then you will find that you are no longer seeking. He writes, “That is the first thing to learn—not to seek. When you seek, you are really only window shopping.” In the same vein as Nietzsche, he repeatedly refers to truth as “a pathless land.”

A therapist once advised me, in the spirit of quelling my lifelong anxiety, to always return to my breath, in and out—a Zen Buddhist practice. Just this, just this, she said to repeat on the inhale and exhale, something I took as its own small prayer.

I thought this was mostly worthless at the time. For me, bouts of anxiety never feel like just this, but the opposite. They’re the intrusion of the whole incomprehensible world, which I feel ill-equipped to understand and then terrified to live in. I could pray for wisdom or a greater sense of inner peace—like the Serenity Prayer, to accept the things I cannot change. But I’ve also read that at bottom, anxious, questioning people harbor a secret wish for control. We want to make the world known and manageable. Isn’t this part of the reason why children ask questions?

I pose as if what I want is to earnestly search, to make some kind of digital pilgrimage, but that’s not really what I want. I don’t seek space for questions, to let them hang and maybe have things revealed to me. I don’t want to do the hard work of detachment or faith and acceptance. It is hard for me to see prayer as anything more than an outlet for my private melodrama. What I really want is instant gratification, answers on-demand.

I know all of these philosophers are right. I’m guilty of all their charges: I am a child, wanting easy answers, a god craver, mired in attachment and worldliness. A sinner. I feel constantly betrayed by my youth. I hate myself for seeking childish things—truth, meaning, the possibility of a loving god. For asking for these things from a mystical series of algorithms. But I want them still. Even though I can imagine Nietzsche, my therapist, the Dalai Lama, and every professor I’ve ever had—all shaking their heads, handing me a tape recorder.

The only time I did poorly on an English paper was writing five pages about Oedipus Rex. The assignment was simple, borderline cliché: identify what doomed Oedipus and analyze what makes the play a tragedy. I knew my teacher wanted me to write about hubris, to give the classic reading that in denying his own fate, Oedipus sealed it. Moral: Don’t mock the gods with your vanity.

But reading the play, I felt sorry for Oedipus. He’s a blowhard to be sure, harassing Tiresias the prophet and bullying everyone to get information. But what really drives him is a desire to learn the truth about his identity. It’s a universal need. And it’s a mission no one else in the play will take up, even though the health of the entire city of Thebes depends on it.

At one point in the play, Jocasta, Oedipus’s wife/mother—though the latter is yet to be revealed—implores him to drop his whole investigation. Oedipus says that he won’t do it, that not knowing the truth will only bring him more distress. “O you unhappy man!” Jocasta replies. “May you never find out who you really are!” Suspending Freudian analysis for a moment: this is his own parent, telling him to quash the question who am I?

Schopenhauer once wrote, in a letter to Goethe, that what makes “the philosopher” is “the courage to make a clean breast of it in the face of every question.” Schopenhauer compares the philosopher to Oedipus, who, “seeking enlightenment concerning his terrible fate, pursues his indefatigable inquiry, even though he divines the appalling horror that awaits him in the answer.” He adds, “But most of us carry in our hearts the Jocasta who begs Oedipus for God’s sake not to enquire further.” Nietzsche later mocked Schopenhauer for this false heroism, what he called a deeply misguided “will to truth.” It’s only vanity, Nietzsche says. Hubris.

In my paper, I took up Schopenhauer’s claim. I argued that truth-seeking was Oedipus’ tragic flaw. My teacher gave me a barely passing grade. “A closer reading would’ve revealed that it’s all hubris,” she commented. At the time, I was livid. Who wants to read about hubris over truth-seeking? But I’ve started to wonder if the two might be closer than I thought.

Some theorists argue that technology such as Google search will kill off the questioning impulse. Ray Kurzweil, for example, has popularized the concept of a technological singularity, when computers advance to a point of superintelligence such that their predictive capacities outstrip our own. A post-singularity world, he posits, will be full of technology that goes beyond the control of a single person, perhaps like the restless, yearning operating system in Spike Jonze’s movie Her. For my part, I imagine the singularity as Super Google. Right now, as Dave said, Google can access information, interpret it, but not “understand” it in a human way. But it’s arguably a fine distinction.

The writer Mike Thomsen compares this moment—Kurzweil predicts we’ll cross over around 2029—to when dogs separated themselves from wolves and became domesticated.

“Years from now,” Thomsen writes in The New Inquiry, “what we think of as a computer will look on our efforts to work out logic problems with the same paternalistic appreciation we feel when dogs stop to inspect a promising pile of trash on the sidewalk, hoping to find in it something meaty.” Our pets, he says, will never resolve their “instinctive questions of hunger,” but they also don’t need to. They have us to lead them, and we enjoy having them around, too, so the end is just mutual enrichment.

I am similarly split between the child and the adult, the seeker and the enlightened one, who simply accepts. I think about going to my computer and searching how to let go.

Navigating an Uncertain Future for US Roads

The last time the automotive industry in the United States experienced rapid technological change was more than a century ago. In 1900, the industry (whose primary competitor was the horse) comprised 40% steam-powered, 38% electric-powered, and 22% gasoline-powered vehicles. After the advent of mass production, made famous by Henry Ford’s 1908 Model T, the internal combustion engine rapidly became the dominant automotive technology, and by the 1930s competing technologies were all but extinct. Since then, the industry has followed a century-long trajectory of steady, incremental innovations that have gradually improved vehicle performance across a variety of metrics such as horsepower, fuel efficiency, emissions, and safety.

Today, the automotive industry is beginning to enter another period of rapid change with the emergence of three revolutionary technologies: electric power trains, autonomous vehicles, and ride sharing. These revolutions have the potential to shape not only the trajectory of automobile design and performance but also the long-standing automotive regulatory environment and the entire personal transportation system. Despite their disruptive potential, all three technologies still require one critical piece of infrastructure: roads.

The nation’s more than four million miles of roads play a vital role in the economy. But maintaining them is expensive. According to the American Society of Civil Engineers, the United States will need to spend more than $3.4 trillion on infrastructure through 2020, half of which will be for roads, bridges, and transit. Today, funds for infrastructure spending primarily come from taxing fuel consumption—a policy approach that is desperately outdated and unable to cope with the changes brought by emerging automotive technologies. The steady decrease in fuel consumption brought by increased average vehicle fuel economy and greater adoption of electric vehicles already threatens state tax revenues and the solvency of the Federal Highway Trust Fund. Nor does the fuel consumption tax offer the flexibility necessary to adapt to new road use patterns enabled by autonomous and shared vehicles.

Although the funding situation for roads is grim, it is not too late to make the changes necessary to avoid disaster. Replacing the fuel consumption tax with a tax on vehicle miles driven is one possible solution that more accurately tracks with road damage and offers a wider range of options for navigating the looming revolutions of electric, autonomous, and shared vehicles.

The system is broken and broke

Established in 1956, the Federal Highway Trust Fund bankrolls highway construction and maintenance, and for the past 60 years it has primarily depended on excise taxes on gasoline and diesel. Over the past decade, expenditures have continued to increase, running between roughly $40 billion and $50 billion annually, but revenues from fuel taxes have flatlined at roughly $35 billion to $40 billion annually due to steady improvements in vehicle fuel economy, changes in driving patterns, and inflation. Since the 2008 recession, the federal government has begun diverting funds from the General Fund of the US Treasury to make up shortages in the highway fund. The diverted sums have increased from $8 billion in 2008 and $7 billion in 2009 to $19.5 billion in 2010 and $22.5 billion in 2014.

The figure below illustrates the increasing dependence of the highway fund on these transfers to remain solvent.

[Figure: Federal Highway Trust Fund revenues and expenditures, showing the General Fund transfers needed to keep the fund solvent]

Many states also levy fuel taxes to support their roads. But here, too, funding is falling short. An analysis by the media organization Governing shows that state fuel taxes over the past two decades have not kept up with inflation in two out of three states. More than 10 states have had no choice but to raise gas taxes in recent years, and more than 10 others are considering raises in 2017. But keeping up with inflation is the least of the looming problems for federal and state infrastructure budgets.

Even if the Trump administration relaxes fuel economy standards (which might delay increases in fuel economy in the short term), the revenue gap is likely to worsen. Trends across all vehicle segments point toward continued improvements in fuel economy, which increased by 28% between 2004 and 2015. And more fuel-efficient, hybrid, and electric vehicles are coming on the market each year. Thirteen automakers offered at least one electric option in 2016. Tesla announced the 215-mile-range all-electric Model 3 (which received a record-breaking 400,000 preorders) and Chevrolet launched the 238-mile-range all-electric Bolt. The overall number of plug-in electric vehicle models on the market reached 25, up from 16 the previous year and just 3 in 2010. Battery prices are rapidly falling, and driving ranges are getting longer. Even though plug-in electric vehicles still represented less than 1% of all vehicles sold in the United States in 2016, mandates in 10 states require that 15% of statewide vehicle sales be zero-emission vehicles by 2025, which, if achieved, would put three million new electric vehicles on the road. All those drivers will soon have the power to extend their ranges: in late 2016 the US Department of Transportation released plans to establish 48 national electric vehicle charging corridors, covering nearly 25,000 miles of highway in 35 states.

Faster adoption of electric vehicles is good news for the environment and national security, but the trend is gradually chipping away at federal and state infrastructure funds. A 2015 study by researchers at Carnegie Mellon University showed that greater adoption of electric vehicles could reduce revenues by $200 million to $900 million by 2025. But even without such an adoption explosion, federal fuel economy standards still require that the sales-weighted average fuel economy of all cars sold by each automaker in the United States reach 54.5 miles per gallon by 2025, dramatically reducing gasoline tax revenues.
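
A back-of-the-envelope sketch shows why rising efficiency erodes the revenue base. The 18.4-cents-per-gallon rate is the federal gasoline tax, unchanged since 1993; the 12,000 annual miles is an assumed, roughly typical figure.

```python
# Federal fuel tax paid per year by vehicles driving the same distance.
FEDERAL_GAS_TAX = 0.184  # dollars per gallon, unchanged since 1993
ANNUAL_MILES = 12_000    # assumed typical annual mileage

for label, mpg in [("25 mpg sedan", 25), ("54.5 mpg CAFE target", 54.5)]:
    gallons = ANNUAL_MILES / mpg
    print(f"{label}: ${gallons * FEDERAL_GAS_TAX:.2f} in federal fuel tax")
print("all-electric vehicle: $0.00 in federal fuel tax")
```

The same miles of road wear generate about $88 from the sedan, about $41 from a car meeting the 2025 target, and nothing at all from the electric vehicle.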

Tax miles, not gallons

The simplest near-term solution is to increase the fuel tax, but that would continue to put an unfair burden on rural households and drivers of less-fuel-efficient vehicles while letting electric vehicle owners off the hook for road maintenance—not to mention that raising the tax has also repeatedly proven politically hopeless. The most recent federal increase in fuel taxes was in 1993, and that required then-Vice President Al Gore to cast the tie-breaking vote in the US Senate. State fuel tax raises have been more successful in recent years, but they merely provide temporary respite from the longer-term budget threat of more-fuel-efficient and electric vehicles.

For decades, experts have called for replacing the fuel tax with a system that more accurately tracks road use, such as taxing vehicle miles traveled (VMT). A VMT tax charges drivers directly for the miles they drive rather than for the gallons of fuel needed to drive them, so revenue no longer depends on how much fuel is consumed. Opponents of VMT taxes argue that they disproportionately affect lower-income and rural groups, but these worries may be overstated. A study at Oregon State University found that switching to a VMT tax would actually be less regressive than raising fuel taxes. Fuel taxes disproportionately burden rural households because rural drivers on average drive greater distances and in less-efficient vehicles than urban drivers. Although VMT tax schemes would still unevenly burden rural drivers for driving greater distances, they would eliminate the penalty paid for driving less-efficient vehicles, as the sketch below illustrates. And while fuel taxes are the same for all citizens regardless of income, VMT taxes could be structured to be even less regressive by, for example, using different tax rates based on income brackets.
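
A hypothetical comparison makes the point concrete: under a fuel tax, the rural driver pays a premium both for distance and for a less-efficient vehicle, while a flat per-mile tax charges for distance alone. The driver profiles and the 30-cents-per-gallon combined fuel-tax rate are invented for the arithmetic; the 1.5 cents per mile matches Oregon’s OReGO pilot discussed below.

```python
# Illustrative only: how a per-mile tax removes the fuel-economy penalty.
FUEL_TAX = 0.30  # dollars per gallon, assumed combined state + federal rate
VMT_TAX = 0.015  # dollars per mile, the rate piloted in Oregon's OReGO

drivers = {
    "urban, 9,000 mi/yr at 30 mpg": (9_000, 30),
    "rural, 15,000 mi/yr at 22 mpg": (15_000, 22),
}
for profile, (miles, mpg) in drivers.items():
    fuel_bill = miles / mpg * FUEL_TAX
    vmt_bill = miles * VMT_TAX
    print(f"{profile}: fuel tax ${fuel_bill:.0f}, VMT tax ${vmt_bill:.0f}")
```

Under the fuel tax the rural driver pays roughly 2.3 times what the urban driver pays; under the per-mile tax the gap falls to the 1.7 ratio of miles actually driven.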

Opponents also argue that VMT taxes discourage the adoption of more-fuel-efficient vehicles. But that’s also what makes them less regressive. And although VMT taxes could be structured to charge lower fees for more-fuel-efficient or electric vehicles (making the taxes more regressive), the more important point is that VMT taxes give policy makers the flexibility to decide how to make such trade-offs while maintaining a sustainable revenue structure as vehicle efficiency continues to improve. The fuel tax, in contrast, lacks such flexibility, and if it is to keep up with the nation’s infrastructure needs, it will become only more regressive with time.

Reduced fuel consumption may well be accompanied by increased miles driven. Such trends will further undermine the potential of fuel taxes to keep up with the expected increase in road wear and tear from a future of fully autonomous vehicles. For example, with even just partial automation, systems such as Tesla’s autopilot are making longer commutes less arduous. Full automation could enable commercial trucks to travel around the clock, dramatically increasing annual truck mileage and road damage, to say nothing of the anticipated increase in private vehicle miles traveled. Some observers fear a future of “zombie cars” driving in circles with no passengers to avoid parking fees.

Yet other trends may push toward less vehicle travel. Ride-hailing providers such as Uber and Lyft could transform the way people take taxi rides, thanks to new ride-sharing services such as Uber Pool and Lyft Line where fares are split with strangers going in common directions. A recent study at the Massachusetts Institute of Technology concluded that ride sharing could reduce the number of taxis in New York City by 75% without significantly affecting travel times, resulting in lower average VMT and reduced fuel consumption. However, since ride sharing dramatically lowers the costs of taking a taxi, mass transit users may move toward shared cab rides over bus or rail alternatives and thus increase average VMT and fuel consumption. Research and experience will gradually reveal the effect these forces might have on overall VMT and fuel consumption, but in the face of such uncertainties, taxing the fuel consumed provides little flexibility for adapting to changing trends.

Indeed, uncertainty about when and how electric, autonomous, and shared vehicles will affect travel patterns is one of the strongest arguments for a VMT tax. By taxing the miles, regulators and other stakeholders can work together with greater flexibility to manage the societal benefits and costs of driving. Excessive autonomous driving can be discouraged by charging a higher rate when no passengers are detected, and rates could be dynamically changed to encourage more vehicle sharing during high-congestion periods. The specific details of how rates might be changed to achieve different societal goals are a question for future research, policy experiments, and political debate, but fundamentally it is not such a radical idea; many toll roads, for example, charge different rates depending on the number of axles a vehicle has, presumably because heavier, multi-axle vehicles contribute disproportionately to road damage. With a VMT tax, other negative societal impacts from driving, such as pollution and congestion, can also be considered.
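
As a sketch of what that flexibility might look like in practice, the following hypothetical rate function prices occupancy and congestion. Every multiplier is invented; calibrating such parameters is precisely the policy question left open above.

```python
# Hypothetical dynamic per-mile rate; all numbers are placeholders.
def vmt_rate(base=0.015, passengers=1, congested=False):
    rate = base
    if passengers == 0:   # discourage empty "zombie car" circling
        rate *= 3
    elif passengers > 1:  # reward ride sharing
        rate /= passengers
    if congested:         # peak-period pricing
        rate *= 2
    return rate

print(f"solo, off-peak:   ${vmt_rate():.4f}/mile")
print(f"empty, congested: ${vmt_rate(passengers=0, congested=True):.4f}/mile")
print(f"3 riders, peak:   ${vmt_rate(passengers=3, congested=True):.4f}/mile")
```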

A tax for both parties

Repealing and replacing a decades-old fuel consumption tax will take serious political effort from both sides of the aisle. Fortunately, a VMT tax offers opportunities that could be attractive to both political parties. Republicans could trade a VMT tax for the removal of subsidies for alternative fuel vehicles such as electric and fuel cell vehicles. Democrats could support it because it encourages energy conservation and can be structured to more fairly tax drivers and different fuels than the consumption tax does. And any politician should be able to get behind a tax scheme that lowers the price of gasoline and diesel.

Although it may be feasible to achieve increasingly rare bipartisan support for a VMT tax, implementation could perhaps be the biggest challenge. Unlike fuel taxes, which are nearly impossible to avoid and easy to collect, VMT tax schemes can range from an annual odometer reading to real-time mileage reporting, with each approach facing potentially different implementation challenges. Fortunately, pilot VMT tax programs are helping policy makers understand the strengths and weaknesses of emerging options. Oregon’s OReGO program has 5,000 volunteers paying a 1.5 cents per mile tax using an onboard plug-and-play device that reports miles driven and fuel consumed while maintaining driver privacy (that is, it does not collect location data). California is conducting a nine-month Road Charge pilot with 5,000 volunteers choosing different reporting options and simulating payment for the miles they drive. University-led research initiatives such as the 3 Revolutions Policy Initiative at the University of California, Davis, are focusing on key policies and strategies, such as a VMT tax, to facilitate synergistic net benefits to society from vehicle electrification, vehicle automation, and vehicle and ride sharing. The data collected from these pilot programs and research initiatives will provide critical information for understanding the implementation challenges and driver acceptance of these systems and enable policy makers to devise tax schemes that are fair and commensurate with the funding challenges ahead. Increased city and state piloting and testing is an excellent place to start in building up the necessary knowledge and political momentum to achieve this much-needed change. The biggest challenge will be to translate what’s being learned in the laboratory of the states into a viable political strategy at the national level for replacing the fuel tax.

There’s no time to lose. Around one hundred years ago, the dominant form of transportation in cities was the horse and buggy, and within a decade it was all but completely replaced by gasoline-fueled automobiles. History has shown countless similar examples of faster-than-anticipated technological change (for example, in a mere decade life without a smartphone has become unimaginable for many of us). A transition to a VMT tax today would enable local and national regulators to begin learning best practices in implementation so society can put in place roads and highway systems to accommodate tomorrow’s more-fuel-efficient, electric, shared, and autonomous vehicles. If we don’t act now, we’re likely to find ourselves with state-of-the-art cars on roads that are unfit for driving.

A Silicon Valley Catechism

Machine, Platform, Crowd

For over a decade, business books have exhorted managers to be “supercrunchers”—numbers-obsessed quantifiers, quick to make important decisions as “data driven” as possible. There is an almost evangelical quality to this work, a passionate belief that older, intuition-driven decisions are a sinful relic of a fallen world. With Machine, Platform, Crowd: Harnessing Our Digital Future, Andrew McAfee and Erik Brynjolfsson aim to formalize the successive canonizations of statistics, big data, artificial intelligence, and machine learning into a consultant-friendly catechism of what smart business leaders should do today. Chapter summaries deliver a snappy series of commandments; chapter-ending questions are designed to ensure readers (presumably managers and B-school students) have not missed the point.

For McAfee and Brynjolfsson, firms such as Google, Uber, Facebook, and Airbnb are the business models of the future. The authors emphasize certain secrets of success at such platforms: scale quickly, achieve user lock-in, and extract revenues by taking a cut of transactions linking consumers and advertisers or service providers. They gloss over tactics such as regulatory arbitrage and tax avoidance. Large platforms now command so many resources that their lobbying efforts can easily swamp those of fragmented and uncoordinated incumbent firms. Whether this is a feature of contemporary capitalism or a bug is left unclear.

To be fair, the authors have in prior work laid out an impressively comprehensive vision of a new social contract for an era of automation. They have advocated for a larger government role in educating workers for new, higher-skilled jobs and ensuring subsistence for those left behind by technological change. But this broad-minded, even magnanimous approach is not much in evidence in Machine, Platform, Crowd. Instead, we see advice aimed at accelerating the types of disruptive social change that their past work was more cautious about.

McAfee and Brynjolfsson’s work on platforms would be improved if they took critical voices more seriously. For example, consider platforms for labor such as Uber. Uber’s gig workers were among the first to realize that the vaunted flexibility offered by the sharing economy may just be a cynical rebranding of job insecurity and a lack of benefits—what some scholars have termed “precarity,” implying a precarious existence. Uber drivers found that their digital bosses were often taking a bigger and bigger cut of revenues, as their own needs for steadier work and benefits were ignored. Researchers exposed how misleading official pay figures were, because they didn’t include costs such as gas, insurance, or car payments.

Uber’s customers have also started to complain. Surge pricing seemed arbitrary. Sometimes the same ride would cost much more for one person than another. Creepy tracking and rating practices proved shocking. Next, the government woke up. Judges and regulators started to force Uber and other firms to recognize that they were employers, not just software providers. Regulators likewise discovered that many Airbnb properties were stealth hotels owned by the very wealthy, not PR-friendly “mom-trepreneurs.”

The story of platforms is a lot less sunny than the narrative presented in Machine, Platform, Crowd, once their negative effects are fairly tabulated. Do we really want the kingpins of Uber and Lyft wielding outsized power over transportation policy, or Airbnb to further centralize room-letting and perhaps move into virtual property management? The authors dodge the hard political questions that arise as the bloom comes off the rose of platform capitalism and antitrust scholars criticize the centralization of power that accompanies winner-take-all digital markets.

Nor are the machines at the core of McAfee and Brynjolfsson’s whiggish narrative of business progress as infallible as they suggest. In their take, machine learning reigns over all business (and perhaps even government and nonprofit) functions as a master profession. But as the sociologist Will Davies has observed, “A profession that claimed jurisdiction over everything would no longer be a profession, but a form of epistemological tyranny.” And it is a confused tyrant at that. At points in their narrative, the authors denigrate human experts in general, cajoling top managers and investors to subject every person they rely on to the crucible of comparison with some type of automated prediction engine. Yet most business and financial experts today already rely extensively on computational systems. What the authors really seem to be promoting is an ever more intense standardization of business practice on algorithmic terms. Are robot CEOs the logical endpoint of their program?

In earlier iterations of artificial intelligence, researchers tried to reduce human expertise to a series of propositions, rules to be applied by an expert system. It turns out that although this approach can work well for very narrow applications, it is difficult to formalize human reactions and skills into a series of rules. That difficulty has not proved an insuperable barrier for the various newer schools of machine learning, which infer patterns from data rather than relying on hand-coded rules.
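
The rule-based form is easy to picture. A hypothetical loan-screening “expert system” might look like the sketch below, with every threshold invented for illustration; its brittleness outside the encoded rules is exactly the limitation just described.

```python
# Expert-system style: expertise hand-coded as if-then rules.
# Rules and thresholds are invented purely to show the form.
def screen_loan(income, debt, years_employed):
    if debt / income > 0.4:
        return "deny"  # rule 1: debt-to-income ratio too high
    if years_employed < 2:
        return "refer to human underwriter"  # rule 2: thin work history
    return "approve"

print(screen_loan(income=60_000, debt=30_000, years_employed=5))  # "deny"
```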

For example, with enough data and computing power, machine learning experts can try multiple algorithms to optimize performance. McAfee and Brynjolfsson mention the difficult problem of managing the temperature of a server farm, and it is easy to see how a computer program could solve the problem second-by-second better than any human expert because there are so many variables (airflow, temperature outside, computational intensity in various parts of the building, and so on) that need to be computed nearly instantaneously. Moreover, a cutting-edge system can experiment, shifting allocations of cooling effort among, say, fans, air conditioners, and other methods, or determining whether a relocation of computing activity (toward, say, colder walls in winter) might be more cost-effective than increasing airflow in areas prone to overheating.
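
A deliberately tiny sketch captures the flavor of the problem: choose the cheapest combination of cooling actions that keeps predicted temperature under a limit. The thermal effects and costs here are made-up stand-ins; a production system would learn such relationships from sensor data and re-solve them continuously.

```python
# Brute-force search over on/off cooling actions; all numbers invented.
from itertools import product

ACTIONS = {  # action: (cooling effect in deg C, cost per hour in dollars)
    "fans_high": (3.0, 2.0),
    "ac_unit": (6.0, 9.0),
    "shift_load_to_cool_zone": (2.0, 1.0),
}
LIMIT = 27.0  # maximum allowed server-inlet temperature, deg C

def best_plan(ambient):
    best = None
    for choice in product([0, 1], repeat=len(ACTIONS)):
        cooling = sum(on * ACTIONS[a][0] for on, a in zip(choice, ACTIONS))
        cost = sum(on * ACTIONS[a][1] for on, a in zip(choice, ACTIONS))
        if ambient - cooling <= LIMIT and (best is None or cost < best[0]):
            best = (cost, [a for on, a in zip(choice, ACTIONS) if on])
    return best

print(best_plan(ambient=31.0))  # cheapest action set reaching 27 deg C
```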

Various machine learning methods are now being developed by different schools of computer scientists. Basic pattern recognizers can map a classic response to a given situation. Evolutionary algorithms can spawn a large number of approaches to a problem, experiment with them, and determine which one works best, ready to be deployed in the future. Bayesian classifiers can weigh evidence about whether a given strategy is working or not, modeling causation along arcs connecting different nodes in a network. And some programs even compose approaches on the fly, coming up with the types of nonhuman intelligence that wowed commentators during the victory of AlphaGo, Google’s artificial intelligence program for playing the complex Chinese board game Go, against the reigning champion in 2016.
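
Of these, the evolutionary approach may be the easiest to illustrate. The minimal sketch below spawns candidate solutions, keeps the fittest, and mutates them; the toy fitness function stands in for whatever simulator or metric a real application would supply.

```python
# Minimal evolutionary algorithm: select the fittest, mutate, repeat.
import random

def fitness(x):
    return -(x - 3.7) ** 2  # toy objective with its peak at x = 3.7

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]              # selection
    population = [s + random.gauss(0, 0.5)  # mutation
                  for s in survivors
                  for _ in range(4)]
print(round(max(population, key=fitness), 2))  # converges near 3.7
```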

McAfee and Brynjolfsson begin Machine, Platform, Crowd with the story of AlphaGo, and quickly parlay it into a metaphor for an eventual, general advantage of machine-learning-driven approaches over human judgment. They contrast machines’ implacable, objective data analysis with humans’ tendency to distraction and subjective judgments. We finally know whether humans can win the “race against the machine” (the title of the authors’ first book together): beset by cognitive biases, they are no match for algorithmic decision making. McAfee and Brynjolfsson mock the usual business decision making as mere deference to the “Highest Paid Person’s Opinion” (HiPPO); the evocation of a dumb, clumsy, oversized creature fated to be subdued by technology adds a frisson of rebellious cheekiness to their program.

Unfortunately for any manager looking to this book as a turnkey solution to strategy, the authors’ case for machine learning is overstated—even self-contradictory. To show that software can be optimized to make better decisions than humans, they offer a series of examples meant to demonstrate weaknesses in human judgment. A sociology professor used a mathematical model to predict firms’ adherence to budget and timeliness of product delivery better than purchasing managers could. A county’s nonverbal IQ test placed more minority children in a gifted program than a process centered on parent and teacher nominations. A simple six-variable model built by law professors predicted Supreme Court rulings for the 2002 term better than 83 prominent legal experts did. From examples such as these, and a simple behavioral economics story about human susceptibility to instinctual rashness, the authors conclude that “The evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even experienced and ‘expert’ humans.”

But where do the algorithms and data come from? As digital sociologist Karen Gregory has observed, big data is made of people. People develop algorithms to parse it. People are part of the “crowd” that McAfee and Brynjolfsson (following Clay Shirky’s Here Comes Everybody) praise for supplying data and labor to so many machine learning applications, in such diverse areas as spam detection and targeted ads. Sophisticated work in critical algorithm studies repeatedly emphasizes the intertwining of computational and human elements in decision making. So why are the authors so intent on maintaining an outdated dichotomy?

Even more damningly, the parade of examples they give of “superior” automated decision making is itself no more than a narrative of computational supremacy. They give no sense of the universe of studies available on the comparative advantage of computation over human decision making, the applicability of these studies, or even whether their examples have been replicated or reanalyzed. Without grounding in such basic statistical concerns, their sweeping claims (one study on Supreme Court prediction is a clue to the future of the entire legal industry! One logistics model could eliminate vast swathes of human labor in that field!) will ring hollow to anyone with even the slightest critical faculty.

Of course, this probably will not be too great a commercial problem for McAfee and Brynjolfsson, since the most likely function of their book is to help managers justify workforce reductions and pay cuts. Like the management consultants brought in to offer a post hoc imprimatur for managerial decisions made long before, the current deluge of “creative destruction” dogma is a tool of rationalization in the Freudian, not Weberian, sense. But even the most cutthroat managers should think twice before hitching their wagon to the stars of machine learning. Not only is it “remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning,” according to a group of Google researchers—it is also often hard to determine whether one has the right training data set even to begin a machine learning process. Furthermore, will strategies derived from past data still work? Perhaps we can apply machine learning to past machine learning efforts to find out, but given the proprietary status of both algorithms and data there, I would not hold my breath for that recursive strategy.

Advanced predictive analytics can also easily become a tool of discrimination. The all-too-human elements of machine learning were recently on display in an extraordinarily troubling paper by Chinese researchers titled “Automated Inference on Criminality using Face Images.” The authors used a machine learning algorithm, trained on a data set of faces of roughly 2,000 criminals and noncriminals, to “empirically establish the validity of automated face-induced inference on criminality, despite the historical controversy surrounding this line of enquiry.” They even provided four archetypal images of “criminal” faces. Critics denounced the study as a rationalization of discredited theories of caste, phrenology, and the innate inferiority of some human types. But the study authors stood firm, defending the importance and validity of their findings.

The facial criminality study raises tough questions for McAfee and Brynjolfsson. At the end of their cursory discussion of algorithmic discrimination, they try to defuse concerns by offering reassurances that machine learning systems can be corrected. (And their point is an even stronger one—that these systems are far more capable of being corrected than humans.) But aren’t there some machine learning projects that are simply too invasive or unethical to conduct? A firm now sells facial analysis technology to assess a person’s health status. Should a retailer use that software to figure out which customers to shun because they will likely be too ill to purchase much soon? Should it use the software on employees or job applicants?

Critical thinkers may also suspect that algorithmic reasoning can fail on its own terms. McAfee and Brynjolfsson praise Barack Obama’s presidential digital campaign team, but then explain away Hillary Clinton’s loss by suggesting that the “quality of data inputs” for her campaign’s analytics system, Ada, was flawed. But that kind of just-so story makes the authors’ claims about predictive analytics unfalsifiable: if they ever falter, low-quality data can take the blame. The hard questions arise in real time; Monday-morning quarterbacking can’t be taken seriously unless the authors specify what outcome would lead them to second-guess their preferred methods of prediction and management.

In McAfee and Brynjolfsson’s telling, predictive analytics emerges less as a way to ensure that current decisions are right and more as a method of organizing information to ensure better decisions in the future. They apparently believe that the tool is destined to become the prime method of improving business processes. But as long as free will and capacities for self-expression endure, the “crooked timber of humanity,” to borrow a line from the philosopher Immanuel Kant, will offer opportunities to resist the method’s rationalizing bent—and smart managers, whether of campaigns or businesses, will be cautious about biases, inconsistencies, and gaps in the data they use.

Character and Religion in Climate Engineering

A group of scientists at the University of Washington has proposed a field test of marine cloud brightening, during which saltwater would be sprayed into the air in an extremely fine mist. The goal is to determine whether it is possible to increase the reflectivity of nearby, low-level ocean clouds and thereby reduce global warming by reflecting more incoming solar energy. Such a test seems benign—after all, it uses only “natural” materials (saltwater, wind, clouds) to encourage a change in cloud reflectivity. Whether this test succeeds or not, it will offer data about how the climate system works, and so it will contribute to the effort of understanding, and perhaps reducing, the harms caused by the emissions of fossil fuel combustion and industrial agriculture. Yet even a small-scale field test of climate engineering raises complicated questions of morality and governance. The Spring 2017 Issues in Science and Technology discussed many such questions and their complexities.

Here we seek to point out a useful but often-neglected conversation partner that can aid these discussions: religion. Religious traditions offer concepts and vocabularies for addressing ethics and policy. Religion is formatively influential for a majority of the world’s population, but is too often ignored in discussions of the social dimensions of climate engineering. Though we are not suggesting that all ethics and policy must “be religious,” we do argue that everyone (believers and nonbelievers alike) can profit from analyzing the distinctive moral and political ideas emerging from religious traditions and worldviews. In particular, we hold that religion is important to broaden the conversation to include the moral issue of character.

Discussions of climate engineering frequently include some conversation about intention, examining not just what researchers plan to do, but also why. For example, climate engineering proposals are often justified by the fact that no political or economic prospect for emissions reductions in the near term seems realistically likely to limit climate change enough to prevent serious harm, particularly to vulnerable populations. In this scenario, climate engineering has a noble intention: to prevent some of the worst impacts of anthropogenic climate change. But this does not provide the whole picture, because it also matters who plans to do the work and how they will be held accountable to the rest of us. This opens the door to discuss the theologically grounded issue of character, which is broader and deeper than intention.

The question of character asks us to reflect on what kind of people and communities should be trusted to engineer the climate, or even to experiment with the possibility of engineering the climate. We propose that religion can be a guide for finding an answer to the question: What kind of character should we seek in the climate engineering research community of scientists, engineers, policy analysts, ethicists, and others?

Even asking the question might raise skepticism. Should scientists and engineers applying for grants and permits and the policy actors and ethics boards who approve them have their moral centers tested? In the current political climate, how could one nation, much less the international community, possibly agree on what kind of character we are looking for? Can we “operationalize” the notion of character, making it a productive concept for guiding policy and governance? Does opening the door to religion run the risk of seeding sectarian debates and divisions? Facing the urgent problem of climate change, do we really have time to start a new conversation about religion and character? These are important questions. To respond, we must first explain what we mean by character, how we can find resources to deepen conversation about it in religious traditions, and how it might apply to the particular case of an experiment to brighten marine clouds.

Religious traditions can help us talk about character

National discourse in the United States does not cultivate a robust discussion of character. The 2016 presidential election was frequently summarized as coming down to “a question of character,” with the two dominant candidates attacking one another as “unfit,” a vague statement of character at best. Since taking office, President Trump has continued this level of discussion on the topic, regularly responding to critiques of his staff by assuring the public that Steve Bannon or H. R. McMaster is “a good man” and that Donald Trump Jr. is “a high quality person.” This suggests that character is easily categorized—“good” or “bad”—and that it can be used to end moral and political discussions.

We need better resources to talk about character well, especially if we wish to use character to enhance moral, political, and even scientific discussions. A vital source of such resources is religious traditions, which have spent millennia defining admirable habits of character and developing ways to nurture those habits in people. The Book of Proverbs in the Hebrew Bible and the Beatitudes in the New Testament both articulate character traits that Judaism and Christianity seek to cultivate, such as wisdom, faith, mercy, and compassion. The Hindu Bhagavad Gita uplifts, explains, and cultivates the character trait of dutiful action, among others. Confucianism emphasizes habits such as propriety, honesty, and integrity, and teaches students how to live them out.

Whatever else they are, religions are moral traditions with deep insight into how people should behave and how good character can be nurtured. This means that religion is profoundly relevant to any discussion of climate engineering and offers a set of resources to talk about the challenging moral and political questions it raises. Furthermore, religions need not compete on questions of character. Though emphases and contexts differ greatly, there are few doctrinal debates within or between traditions on the subject of good character. Instead, we find mutually upbuilding lessons about how to cultivate good people, and these lessons have a great deal to offer aspiring climate engineers.

The French philosopher Paul Ricoeur offers a helpful articulation of what it means to talk about character. In Oneself as Another, the book that emerged from his contribution to the famous Gifford Lectures on natural theology, Ricoeur defines character as “the ‘what’ of the ‘who.’” Character is the aspect of one’s identity that persists over time, solidified by habits and reputation. In other words, one’s character is made of the qualities (the “what”) that one holds over time (the “who”). Someone with a brave character has a self-identity formed and advanced by, for instance, bravely telling the truth even when it causes controversy; the person who can be trusted to do so over and over again, who has the habit of telling the truth in all circumstances, has an honest character. Character is a structure—defining communities as well as institutions—that comes before intentions, shapes intentions, and forms our responses to events. It makes us who we are. A good character contains those qualities of self-identity we individually and collectively value.

What character traits are required to engineer the climate?

But what are the qualities of character that we should seek for climate engineering? For the sake of discussion, we suggest three habits of character that might play a role: responsibility, humility, and justice. We illustrate these qualities with reference to three different religious traditions, although none is exclusive to just one religion.

A core belief in Buddhist traditions is that all actions have consequences, and anyone who intentionally acts will live out the consequences of that action. This idea, summarized by the oft-misused word karma, affects one’s character: a good person is one who accepts and anticipates responsibility for his or her actions. The Dalai Lama applies this lesson to global climate change in a short essay titled “Universal Responsibility and the Climate Emergency.” He asserts that “environmental disasters—Atlantic hurricanes, wildfires, desertification, retreat of glaciers and Arctic sea ice—these can be seen as [earth’s] response to our irresponsible behavior.” He goes on to say that as the people who contributed to this state of affairs and are beginning to come to grips with it, “We ourselves are the pivotal human generation” with responsibility to halt and repair environmental degradation. Our careless actions and those of previous generations have disrupted the natural world; we now live with the consequences. Whatever actions we take to undo this damage, the entire planet will live with the results. Good character means accepting that responsibility, and so any consideration of climate engineering should consider whether those undertaking and overseeing the actions have a properly responsible character.

Foundational to the Islamic faith is the statement that there is no God but God, and an important moral lesson extracted from this is that no human being is God. No person who seeks to take God’s place by controlling or directing the world can be trusted. In 2015, a group of Muslims adopted a statement applying this lesson at the International Islamic Climate Change Symposium in Istanbul. They emphasized the importance of humility. Human beings must take action on climate change in a way that recognizes our limitations, our past failings, and our inability to completely prevent future problems. No person is or can be perfect. The statement underlines this with a quote from the 17th surah of the Qur’an:

Do not strut arrogantly on the earth.
You will never split the earth apart
nor will you ever rival the mountains’ stature.

The statement also instills humility by insisting that humans have caused harm when attempting to “strut arrogantly,” particularly through the “unwise and short-sighted” use of fossil fuels. Humans have arrogantly failed to care for the planet, and any attempt to resolve this problem will need to nurture the kind of humility that prevents further harm and damage. Climate engineering requires a properly humble character.

A primary ethical concern in Christian communities is the commitment to justice, to the equitable treatment of all, the fair distribution of goods, and compensatory attention to those who have been treated unfairly. Christianity teaches that justice is not just a political idea, but is also a habit of character: people learn to behave justly, and habitual justice is part of what it means to grow as a moral person. In his encyclical letter on “care for our common home,” Pope Francis applies this lesson to the problem of climate change. He writes that environmental issues are always also issues of justice, that the human race faces “one complex crisis which is both social and environmental.” This leads him to observe that climatic changes will affect the most vulnerable people and creatures the worst, a particularly unfair dynamic since they have played the smallest role in creating the problem. To stop this injustice, Francis calls all people to cultivate “new habits,” to build toward “healthy politics,” and to spread a global appreciation of “integral ecology.” These characteristics define the character of a just person and a just community, which will be absolutely essential if climate engineering has any hope of responding not merely to the warming world, but also to the inequities of the problems it creates. Climate engineering requires a properly just character.

We do not wish to argue that these particular habits are the only ones, or even the best, when it comes to the qualities of character that will be needed in the climate engineering research community. Rather, our larger goal is to insist that some consideration of character itself is essential. Delineating what it would mean for the climate engineering community to have a good character is a concrete, worthwhile discussion, and a necessary process if we are to address the ethical and governance issues of climate engineering. Likewise, it will be important to define qualities of bad character, which would threaten the success of climate engineering on moral and political grounds.

Do these character traits work in practice?

It is one thing to argue that a moral concept such as character is applicable to a situation, but another thing to show that it is useful in concrete assessments of events and actions. What would it look like to consider character in the course of climate engineering research? Can these considerations be consistent and operational enough to be part of evaluations of climate policy? We offer tentative proposals as discussion starters.

First, to provide a clue as to how researchers’ character might manifest responsibility, proposals for research and field tests should include a detailed account of the problem being addressed, as well as the moral and social impacts of the plan. Any research proposal explains assumptions and understanding about the system with which it interacts. For marine cloud brightening, this might include not only the atmospheric system over oceans and the technology required to alter clouds, but also the political and social systems that make such intervention necessary. A danger of climate engineering is that technological intervention could simply replicate the ideological causes of climate change—that is to say, climate engineering could be an avoidance of responsibility instead of an acceptance of it. Therefore, researchers should demonstrate a responsible character by providing thoughtful analyses of how the proposal not only intervenes productively in the climate problem, but also builds capacity to deal responsibly with the scientific and societal consequences of such intervention. A proposal that includes naive evaluation of moral issues and social impacts might suggest that the aspiring engineers should study the problem they seek to solve more deeply before being confident of their ability to solve it. Such social analysis is not always a part of scientific and technical training, and so researchers should seek the expertise of philosophers, ethicists, and theologians. Scholars of religion are trained to critically evaluate the systemic and ideological foundations of climate change. Such evaluation must be part of a responsible climate engineering research community.

The habit of humility may be harder to evaluate consistently, but doing so is nonetheless vital. Those proposing to research or undertake climate engineering need to consider the limitations of technologies, the limits of understanding that they bring to their work, and their own fallibility. This is particularly important given how ill-equipped existing governance and political structures are to handle the complexity of climate engineering. Proposals should embrace the need for humility in the face of the risks and uncertainties: What is unpredictable in this experiment? If it does not work, what will the cost be and who will bear it? If it does work, could it be misused to cause harm? If so, how preventable is such misuse? These are questions that need answers, and the process of answering them will reveal something about the climate engineer behind the proposal. One who does not take such questions seriously might not have the humility required to do this work well.

Finally, justice should be central to any discussion of climate engineering. A research program will exhibit just habits of character if it is scientifically and politically constructed to prioritize, where possible, the poor and marginalized who already are suffering from climate change. This may be difficult in the early stages of research that primarily involves testing atmospheric processes, but becomes a much more central issue for tests of climate response to engineering. At this point, a just research program will place these stakeholders in important directorial or advisory capacities, giving marginalized communities a significant role in evaluating risks and, ultimately, in deciding whether or not to use the technology. Implicit in this process is considering who has been and will be part of the decision-making processes, and who might be left out. In the longer term, if a process for managing solar radiation works and deployment is considered, can the technology and its distribution be constructed so the appropriate people have economic and political authority over deciding whether and how to implement it?

There are no simple and uncontroversial tests for character, but character is nevertheless a vitally important part of any consideration of climate engineering. Indeed, the fact that character is so difficult to quantify could be useful, because it will ensure that complex decisions about climate engineering will never be made based on scientific facts and political realities alone. To take character seriously, climate engineers will need to engage ethicists, citizens, and faith communities in their work. This may take longer than if a small group of scientists and engineers simply acted on its own, but the only way to respond to climate change responsibly, humbly, and justly is to recognize that no one can do this alone.

Is Precision Medicine Possible?

The future of health care, we are increasingly promised, rests on “precision” genomic medicine, based on the idea that what we are is in our DNA. The pervasiveness of this belief can be seen in the National Institutes of Health’s Precision Medicine Initiative, which promises to bring “precision medicine to all areas of health and healthcare on a large scale.” The promise in turn rests on the view that genes cause disease and that identifying them will allow doctors to predict an individual’s future disease and customize treatment precisely. The promise of “precision” suggests that we need exhaustive enumeration of genetic variants, requiring essentially open-ended projects with enormous samples—“Big Data” collected explicitly without being based on specific hypotheses.

This research paradigm is conceptually locking up ever more of the nation’s investment in biomedical science. It’s fair to ask, however, what justifies the underlying vision. Does what we’ve learned about genetics in the past half century support this promise of precision? If not, are there better ways to direct our finite fiscal and intellectual research resources?

Many diseases and other human traits are predominantly caused by single genes. Indeed, it was such traits in pea plants that led to our basic understanding of genes. But a century’s research trajectory has now shown that most traits, including common diseases such as diabetes, autism, and schizophrenia, or even just height, are due to the effects of not one but tens or even hundreds of genes, and every case is different. We can understand why this is so, but how should research be redirected as a result of this knowledge?

Ideas as well as organisms evolve. What we think today is a product of what was thought yesterday. For historical reasons, genes have become the iconic idea of the causes of our being: every day we hear that something is “in our DNA.” It is but a small step to the promise of precision medicine. But as we’ll see, the science itself tells us plainly what the politics of science hyperbole and science funding keep hidden: genes are important, but precision medicine is likely a false icon.

Crossing Mendel with Darwin

Let’s begin at the beginning. In 1856, a Moravian monk named Gregor Mendel set out to improve the yield of pea plants. He chose to work with pea traits that were qualitative, that is, that took on distinct states (for example, green/yellow or smooth/wrinkled) that didn’t change over generations. He knew that other traits were less predictable, but the ones he chose bred “true,” which made them reliable for farmers.

In many ways we are what we learn, and when Mendel was a student in Vienna, he heard lectures on a new theory that all chemical elements were multiples of the element hydrogen. I think this led him to expect similar units of inheritance, and to attribute the variation he studied to discrete units of causation that he, too, called “elements.”

Science builds on what history provides, and Mendel’s work provided the basis for searching for these units of inheritance, which were later named “genes.” It is important that they were seen as units, because that expectation allowed the early geneticist Thomas Hunt Morgan and others to show that they corresponded to specific locations along chromosomes in the nucleus of cells. That in turn led others to show that genes were codes for specific proteins, the fundamental causal units of life. The code works because genes are strings of individual elements called nucleotides, of which there are four different kinds, whose sequential order in specific locations on a chromosome specifies, among other things, the order of a string of amino acids that will be assembled in the cell to make a specific protein. Variation in these codes arises among humans and other organisms because of mutation, changes in the nucleotide string that occur from time to time.
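The coding logic itself is simple enough to sketch in a few lines of Python. The snippet below uses a handful of real codon assignments (the full genetic code has 64 entries) to show how the linear order of nucleotides, read three at a time, specifies a linear chain of amino acids:

```python
# A few entries from the real genetic code; the full table has 64 codons.
CODON_TABLE = {
    "ATG": "Met",  # also the usual "start" signal
    "TTT": "Phe", "GAA": "Glu", "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read a DNA coding sequence three letters at a time into amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i : i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGAAAAATAA"))  # ['Met', 'Phe', 'Glu', 'Lys']
```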

But there was already a fly in the causative ointment, and that fly was Mendel’s British contemporary Charles Darwin. His idea was that organisms evolved in a gradual flow of continuous, quantitative change, in which the contributions of the traits of parents blended to form the traits of their offspring. That seemed incompatible with the apparent permanence and qualitative nature of Mendel’s elements, which definitely did not blend.

By the 1930s, biologists had come to understand Darwin’s gradualism as the net effect of contributions of countless separate, but individually very small, Mendelian effects. This synthesis opened the door to modern genetic investigation. Decades of success at finding genes, their specific locations on chromosomes, their coding functions, and the regulation of their expression—when and in what cells a gene is used—followed. An important result of these advances is the view that what we are is affected by our genes, our individual sets of these causative points. Technologies based on this assumption have enabled us to do genomewide mapping, that is, using various statistical methods to search the genome—our 23 pairs of chromosomes (one set inherited from each parent)—for locations in which DNA sequence variation among individuals is associated with a trait of interest, such as a specific disease, or range of blood pressure, height, or even some purported measures of behavior or intelligence.
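In spirit, such mapping is a brute-force scan across sites. The Python cartoon below simulates random people with one invented risk variant (site 42) and ranks sites by the difference in allele frequency between cases and controls; real studies use vastly larger samples and proper significance testing, but the enumerative logic is the same:

```python
import random

# A cartoon of genomewide association mapping; everything here is simulated.
N_VARIANTS, N_PEOPLE = 1000, 500

def simulate_person():
    genotype = [random.randint(0, 2) for _ in range(N_VARIANTS)]  # alt-allele counts
    risk = 0.2 + 0.15 * genotype[42]  # hypothetical: site 42 raises disease risk
    return genotype, random.random() < risk

people = [simulate_person() for _ in range(N_PEOPLE)]
cases = [g for g, diseased in people if diseased]
controls = [g for g, diseased in people if not diseased]

def allele_freq(group, site):
    """Frequency of the alternate allele at one site within a group."""
    return sum(g[site] for g in group) / (2 * len(group)) if group else 0.0

# Rank sites by the case-control difference in allele frequency.
scores = sorted(
    ((abs(allele_freq(cases, s) - allele_freq(controls, s)), s) for s in range(N_VARIANTS)),
    reverse=True,
)
print(scores[:5])  # site 42 should reliably surface at or near the top
```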

Interpreted through the lens of today’s computational Big Data worldview, mapping has led to the belief that wholesale enumeration of these causal points, on a genomewide scale, will lead us out of the wilderness of causal incomprehension and into an era of precise understanding of genes and their actions. Supported by massive funding, we geneticists have gotten our wish: a tsunami of data that will, we still insist, be the source of the “precision” in precision medicine.

Genetic dimensions

But I wonder if this may more properly be viewed as a failed success. That’s because the data are revealing that what we wanted to find, and thought would be simple enough, is generally not how life works. To see what I mean, it will help to think figuratively, and perhaps in some ways literally, in terms of causal dimensions.

Genetic variation in any species, including humans, is the result of population history. That history is a process of descent, with genetic transmission that connects individuals and their functions generation after generation. An essential aspect of population history is heritable variation, whose ultimate source is genetic, that is, mutations in our DNA.

Because genes have specific locations on chromosomes, it is tempting to liken genetic variation to points of causal light that are either on or off, green or yellow. This was essentially Mendel’s bright idea in choosing the traits he would study in peas. That assumption lets us focus on each trait’s causal point and ignore the rest of the genome. But for the complex, later-onset, and environmentally affected diseases, whose genetic basis is the subject of the big-data swoon today, that assumption usually doesn’t hold. Obvious examples of non-Mendelian, non-discrete traits are heart disease, obesity, height, weight, intelligence, schizophrenia, blood pressure, late-onset diabetes—and perhaps even the tendency of some of us to write cranky assessments of the situation. Even though parents and offspring resemble each other for such traits to some degree, none have Mendelian two-state point causes.

Furthermore, treating genes as individual points ignores important aspects of how genes are used. The human genome is home to tens of thousands of different genes, but that’s not all: short DNA sequences near each gene control that gene’s expression, that is, when and in what cells the gene is used. These sequences are binding sites for the assembly of tens of regulatory proteins. The use of a given gene depends on the arrangement of these nearby sites along the line of nucleotides that is a chromosome, and that adds a linear dimension to what otherwise might seem to be a string of independent causal points. In fact, chromosomes contain many other types of sequence strings, whose functions depend on their location and arrangement along the chromosome.

A rather amazing fact is that all of our cells contain the same genome, which we inherited from our parents, but we are differentiated organisms with many different tissues and organs, and even each tissue does different things under different conditions. That means that gene expression patterns vary cell by cell and tissue by tissue: a gene isn’t always just “on” or “off,” but must instead respond to context and circumstances that change. This adds a time dimension to genetic causation. But there is more.

Population history generates webs of redundancy in genetic functions, meaning that the causal space is so complex that many different genetic pathways, involving different genes or patterns of gene usage, can achieve similar results. Indeed, perhaps the most important finding of gene mapping studies so far is the extent to which many different genotypes yield similar traits such as stature, blood pressure, diabetes, or intelligence. In general, no two individuals have the same trait for the same genomic reason. The contributions of individual causally related variants are elusive because a variant’s effects are typically very small, the variant is rare, or both, and these patterns vary among human populations.

Further complicating this picture is that mutations arise in the cells of each of the tissues in our body during our lives. These mutations are transmitted within the tissue when the cells divide, and they can affect the cells’ behavior, sometimes quite seriously. Cancer is the clearest example. However, these are called somatic mutations because though they can affect our traits, they are not in the germline (sperm or egg cells) transmitted from parent to offspring. Yet it is that inherited genome sequence on which mapping is typically based because it is presumed to apply to all of a person’s cells.

Genomes have codes for a repertoire of regulatory genes, whose coded proteins bind DNA near some other gene to affect that gene’s expression. But this is not a one-for-one kind of control. Instead, a regulatory protein is typically used in many different contexts. What makes regulation gene-specific is the combination of these factors that binds to nearby DNA to regulate the gene’s usage. This is like using some common keywords together to get a combination that yields a precise hit in a Google search. And since the regulatory genes are themselves each coded in a different place in the genome, their assembly in locally specific combinations elsewhere in the genome requires them to navigate to get there, and that makes genetic action three-dimensional within the cell. We may also think of action by combination as adding even another, rather abstract logical dimension to the genetic causal landscape.
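The keyword analogy can be made literal in a few lines of Python. The gene names and factor requirements below are invented, but they show how a small shared vocabulary of regulatory proteins can yield gene-specific control through combinations:

```python
# Invented example: each gene is used only when its full combination of
# regulatory factors is present, though every factor serves many genes.
BINDING_REQUIREMENTS = {
    "geneA": {"factor1", "factor2"},
    "geneB": {"factor2", "factor3"},
    "geneC": {"factor1", "factor3", "factor4"},
}

def expressed_genes(factors_present):
    """Return the genes whose required factor combination is fully present."""
    return [gene for gene, needed in BINDING_REQUIREMENTS.items()
            if needed <= factors_present]

# factor2 alone specifies nothing; combinations do the specifying.
print(expressed_genes({"factor1", "factor2", "factor3"}))  # ['geneA', 'geneB']
```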

More knowledge, less precision

Despite a steady drumbeat of promises linking genomics to precision medicine, mapping studies show, in exquisitely clear detail, the opposite of what would be needed to fulfill those promises: rampant causal imprecision. Furthermore, we are seeing only the proverbial tip of the genomic causal iceberg because mapping done to date has mainly involved Europeans. Yet we know very well that mutations are always arising in every local area, and those few that spread to geographically distant locations take countless generations to do so. That means that much if not most of the causation of the same trait will vary among populations: what we find in Europe will apply only partly to other places, meaning that even if genomic Big Data prediction were to work, separate large-scale mapping studies would be required in each place.

However, even this is far from the most disturbing aspect of genetic complexity and its unpredictability. The deluge of variation specifically identified by mapping typically accounts for only a fraction—usually a small fraction—of the trait’s overall heritability, that is, the estimated part of its overall causation that is genetic. The unaccounted-for part involves the genetic variants with weak or rare individual effects that I referred to above. This “leaf litter” of countless unidentifiable, individually minor variants will vary among individuals and populations. Identifying more of these sites is a typical rationale for requesting funds for expensive mapping studies involving hundreds of thousands of people. But increasing sample sizes and numbers of studies will mainly just add to the inexhaustible cacophony of variation that we find.

We should not be surprised by this. A major reason for the plethora of rare and weak effects is not the Darwinian one of relentless competition and hence precisely eagle-eyed natural selection. That is a convenient ideology that doesn’t fit the reality. Natural selection quickly favors strong positive effects that assure survival, and removes harmful ones, but is hard-pressed to discriminate among a multitude of tiny effects of less than existential significance. Instead of competition, the typically weak effects of individual mapped sites are more likely due to what has passed the screen of cooperation among gene products during the development of the embryo: by and large, what is born healthy must already basically function properly.

The variants that are survivable, and hence available for mapping to find, will generally have the residual weak genetic effects that are compatible with life. And of course this plethora of weak effects is by far most of what has been found. You might notice, too, that so far I have not even mentioned the environment and our lifestyles, which can hardly be measured accurately and yet which affect many if not most of our traits.

Could mapping now mainly be a very expensive exercise in chasing rainbows? It has successfully revealed something about biological causation that goes beyond arrays of genes along a chromosomal line acting as individual, independent point causes with nearby regulatory elements. But these complexities throw into question the very idea of genomics-based precision medicine, a question fundamentally related to the concept of precision itself, about which more below.

And yet there are still deeper challenges. What connects and coordinates the tens to thousands of factors that contribute to a trait we are trying to understand? The mechanisms underlying the complex interactions among these factors remain largely unaccounted for. I think this at least raises the possibility that our causal landscape has some additional unrecognized dimensionality.

Spooky action at a (very short) distance

Albert Einstein famously couldn’t accept that physical effects could occur essentially instantaneously across great distances in space, an idea now called “entanglement.” He called it “spooky action at a distance.” However, he was wrong: in physics, distant effects really are important. In genetics, too, we can ask how interactions are accounted for over the very short distances across the genomic causal landscape.

Gene action is organized into networks, in which one gene activates or inhibits one or more other genes, which in turn affect yet others. In any cell, at any time, many networks are active, with thousands of genes being expressed. The local chromosome region of each of these genes is bound by an appropriate combination of regulatory proteins. Do these molecules simply dart randomly around so rapidly, and at such suitable concentrations, that they all more or less automatically find each other fast enough and in the right combinations to trigger the right gene expression for that cell, just by chance? Or might something else be needed for our understanding, some other dimensional glue, to attract each complex of factors to its appropriate place and time? If such phenomena exist, they will not be found by enumerating countless weak variants in endless megastudies. A few examples of communication at a distance will show what I mean.

Our genomes contain hundreds of genes that enable us to smell different odors. These genes are located in clusters of varying size, and the clusters are scattered on almost all of our chromosomes. Yet, despite having hundreds to choose from, each odor-detecting cell in our nose uses just one of these genes. An odor molecule sniffed in will be detected only by cells expressing some particularly suitable detector gene, which sends a very specific “I smell it!” message to the brain, and it is the combination of signals that makes “lemon” something we can identify. What kind of communication within the nucleus selects just one of these odor-detecting genes in each cell in the lining of the nose, shutting down all the others?

Another example of currently unexplained communication in our cells is that when a gonadal cell is about to divide to produce a sperm or egg cell, the two copies of each chromosome (one that was inherited from each parent) line up with each other, and they then separate as the cell divides, so that the resulting cells each contain just one copy. How do they find each other in the nucleus, to align in this way?

Not all gene regulatory networks work within individual cells. Complicated organisms like us exist because there is communication among the cells in and between our different tissues and organs. The way this works is that cells monitor their external environment, detecting and responding to signal molecules that were produced by other cells in the body. That is how hormones work. The communication is two-way: receiving cells also send molecules to signal other cells, and cells may even monitor the relative amounts of different signal molecules that are passing by. This means that gene action, and its results, cannot be understood from looking at gene usage in the dimensions within cells alone. This may not be spooky, but it is action at a distance.

How do thousands of different genes scattered across the chromosomes become activated at any given time, based on signals they generate or detect from outside? The nucleus may seem like a bowl of spaghetti in which the many chromosomes float freely on long, tangled strands. But clearly the thousands of molecules and their combinations avoid being so tangled as to interfere with their local production and assembly, or with their impressive ability to vary within cells as circumstances change and among cells of different types that are, nonetheless, doing all this with copies of the person’s same genome. Something must be organizing this four-dimensional pattern in space and time, and its changeability shows that the organization must include standby factors for contingencies.

Could something other than sequences of gene-by-gene activation—some as yet unidentified causal dimensions—be involved? It is at least fair to suggest that our deeply entrenched enumerative view of genomics as the means of revealing causality is blinding us to other possibilities, by locking us in research pathways, and their associated policy implications and promises, that may be past their prime.

Unknowable unknowns

We naturally hunger for frameworks that explain the world, but these often become dogmas. A dominant genetic thought-mode today rests on the enumeration of point causes. Yet, despite the gravitational pull of its history, I’ve tried to use the idea of complex causal dimensions to show there are reasons to doubt that an understanding of biological traits, much less their precise prediction, can be reached by racing down the enumerative Mendelian track that dominates genetic science today. We should use what tools we have to be as precise as possible, of course, but that’s not what the word “precision” really means, as we should understand if better policy is our goal.

The classical criterion for genetic control of a trait was its presence in families in specific Mendelian patterns. Those inheritance patterns distinguish genes from other factors in our lives. But adequate samples of multiply affected families are hard to come by, and one of the rationales for Big Data mapping was that common diseases are caused by common variants with strong effect that could be identified in huge population samples without the need for chasing down sets of families.

There was never a good reason to believe that this would work, and mapping has clearly confirmed that simple genetic causes are not generally responsible for common diseases. Indeed, in an ironic twist, some mapping investigators are now defending megascale projects by stressing the importance of searching for rare, not common, variants with strong enough effects to be detected, believe it or not, in occasional families. There will of course be some successful searches, for reasons that were clear a century ago, because some rare genetic variants do have strong effect on their own. But by far most cases of common diseases are not each caused by a different single-gene effect.

There’s a deeper point, too. The idea of precision implies that there is a truth out there, and as we make our measurement instruments better, our estimates will approach that truth ever more closely. That works when what we want to measure has a true value the way, say, the speed of light does. Could the comparable medical fact be that a genetic variant has some true probability of causing a disease, rather than always doing so? Unfortunately, not even this is so, and for a subtle reason.

We can see this by asking why, if the variant is a cause, doesn’t the disease always result when the variant is present, rather than only with some probability? The reason is that the outcome depends not on that variant alone, but on the combination of many additional factors that I’ve discussed that are also present. Not only is that combination different for each person, but many of those factors, such as mutation and lifestyles, will arise in the future, and are not predictable, not even in principle.

This clearly means that we have no way to know how imprecise our predictions are. And that in turn shows why the basis for the promised genomic precision medicine simply does not exist, no matter how much we might wish otherwise.

It is reasonable to ask whether the complexity of genetic causation is a new discovery. Let’s look back nearly two decades, to the year 2000. In two books widely read by both scientists and the general public, Richard Lewontin and Evelyn Fox Keller noted the iconic status that “genes” had attained, and they warned about oversimplifying or centralizing the role of genes in life, ignoring or downplaying both the organisms that contain genes and the environments in which organisms must function. In that same year, the geneticist Joseph Terwilliger and I cautioned about these issues specifically in the context of the then-blooming romance with genomewide mapping.

Even earlier, in 1993, I concluded my own book by noting “enumeration is … a rapidly obsolescing way to think about the relationship between genotype and phenotype.” As I then said, the ultimate goal should be synthesis, and I’ve tried here to explain the nature of the genomic causal landscape that we need to confront if we are to go beyond enumeration.

The sometimes cosmic-scale complexity of the possible interactions within and among genomic dimensions is out there for all to see. Thoughtful geneticists understand these things perfectly well. So, on what basis can we promise precision predictability from DNA sequences?

Unfortunately, much of the answer is that the reality of improving the yield of publicly sponsored science is about the money, not the science. Underlying that reality is that when scientists must get their very salaries, and universities their operating funds, from individual grants, a conservative, defensive, safe, assembly-line, and eventually sclerotic system that always promises future miracles is as inevitable as sunrise. It’s what we have today.

For more, and more flexible, progress to be made, research resources should be moved away from Big Data studies that are too large, open-ended, and entrenched. Major funding change always meets staunch resistance, of course, but there should be no welfare system for geneticists if we refuse one for coal miners. Scientists are capable people who can, and would, adapt to a system that funds more focused and innovative ideas. The same resources could be applied in better ways to ease human suffering more directly and increase the chances of truly innovative discovery.

For starters, a great many life-devastating diseases really are genetic in every sense of the word. Cystic fibrosis, sickle-cell anemia, muscular dystrophy, and Huntington’s disease are just a few well-known examples. We should make intensive investment in genetic-engineering technologies to fix such problems. When that has succeeded, as I think it often will, the engineering methods could then be applied to weaker, more subtle genetic effects, although their multiplicity and individually unique combinations mean that success there may be less likely. Meanwhile, we already know that for many or most common, complex diseases, by far the best medicine is prevention, which is about lifestyles, and that is where urgent investment should be made. Without toxic lifestyles, the remaining cases really would be genetic.

Science is hard, all the more so when problems seem urgent, as in our natural desire to prevent or treat disease. We can’t just go on Amazon and order discoveries of a fundamentally new sort that will revolutionize the future of medicine. If changes in the research funding system relieved investigators from the relentless scramble for funds, they could be freed to do truly creative work. If that creativity were informed by the challenges and opportunities in the clinical setting, rather than by the technological imperative to keep sequencing genes, then the grip of Mendelian fundamentalism might be loosened. A way should be found to shift funding toward more, longer, even if smaller grants, to support projects where scientific creativity brings together learning at the bench and the bedside.

The resulting freedom would enable projects to be more diverse and less safely me-too. Most ideas will fail, because that’s how science is. But some will almost certainly succeed, and yield bigger rewards for human knowledge and well-being than churning out more of the same. Change can be difficult, but life itself has evolved through change. That is a lesson we could apply to the evolution of the research enterprise, too.

“This Essentially Meaningless Conflict”

Marilynne Robinson’s accomplishments are impressive by any standard: she has won the National Book Critics Circle Award for Fiction, the Pulitzer Prize for Fiction, and the National Humanities Medal, among other honors. But perhaps a better measure of her eminence as a writer and thinker for our times is this: When the New York Review of Books ran an extended interview with Robinson in November 2015, her interviewer was … President Obama.

Robinson’s fiction and essays display a combination of fierce intelligence and profound human empathy. Her four novels are at once gorgeous, revelatory, and lapidary; her essays, ruthlessly clear and often deeply challenging. At the heart of her work is her Christianity, and from there she explores everything from the prospects for democracy to the role and limits of science in our lives. She is equally comfortable, eloquent, and convincing in discussions of cosmology and the power of the sermon, and she celebrates both science and faith as expressions of our humanity.

We interviewed Robinson via e-mail, and our questions referred specifically to three of her works: her 2004 Pulitzer Prize-winning novel, Gilead, narrated by the elderly preacher John Ames, and the essays “Proofs” and “Humanism,” from her 2015 collection, The Givenness of Things. Her responses, which offer only a glimpse of the warm and penetrating brilliance of her thinking and writing, highlight a perspective that we wish were more broadly available in efforts to explore the interactions and intersections of science and religion. If, she suggests, one views science as a skeptical, questioning mode of inquiry “whose terms and methods can overturn the assumptions of inquirers,” then it can be neither a threat nor an alternative to religion. After all, there are no possible scientific tests for the reality of soul, self, or God. She holds science to a strong standard of integrity while insisting that the concepts of science “are beautiful in their own right.” This rigorous and generous way of understanding things points the way toward a harmony that is both intellectually and emotionally satisfying.

In “Proofs,” you write, “We have made very separate categories of science and learning on one hand and reverence for the Creator on the other.” Was there ever a time when these categories were easily seen to be closely related? What are the main ways in which this separation came about, do you think, and how has it come to be so powerful?

First of all, for the purposes of responding to all these questions, I must object to what I take to be an overly general use of the word “science.” I see a vast, qualitative difference between sciences whose terms and methods can overturn the assumptions of the inquirers, and “science” that simply insists on the truth value of its assumptions. The accelerating expansion of the universe, the great prevalence of apparently non-atomic dark matter, the role of the lysosome in regulating the life of an organism—the list of such surprises is endless, and might be called the history of scientific progress. When a method is not finally captive to prevailing consensus, it is science in the positive sense. It is real exploration.

This other business, which is called neuroscience—again, a word probably applied too generally—proceeds on the basis of dubious thought experiments and vast generalizations based on tiny, wildly atypical sample populations. It relies on notions about genetics that are discredited, and economic concepts (cost/benefit analysis, notably) that are never examined. And it depends on an indefensibly simple anthropology. All this is in the service of its assumptions, which are endlessly reiterated and asserted as if proved. Since the nineteenth century, every type of “brain science” from phrenology on has proceeded from and/or arrived at the same conclusions—no soul, no self, no God. Is there any science properly so-called that would find these to be legitimate conclusions on the basis of anything known, learned, or observed? Is the apparent existence of dark energy relevant to these questions? The apparent existence of gravity waves? No, and what could be? These concepts are beautiful in their own right, not proof or disproof of the ultimate, metaphysical character of Being. That said, they are arguably less irrelevant than any conclusions cost/benefit analysis could yield. It is bizarre that when science is in such a brilliant period its public face is this parasitical “science” that flaunts a prestige earned by work of a very different order, and that takes religion as an adversary because for many generations that’s what its ancestors have done.

Much important early work was done by devout men—Descartes, Locke, Newton, and very many others. Their thinking has been treated as if it banished the sacred from experience, but in fact it invested it much more deeply in mind, perception, and knowledge. Calvin said the brilliance of the human being, felt in dreams, imagination, and learning, and demonstrated in science, was proof of the existence of God and of the divine in human nature. This kind of celebration was characteristic of his period, the European Renaissance. Early science was fascinated with the wonderful capacities of the mind and the wonderful order it discovered in nature. Both of these were seen as God’s providence. It seems that often in history only the polemic against a thought or movement is remembered. This kind of religious experience was treated by its adversaries as atheism. And this image of science became fixed.

In “Humanism” you write, “The notion that the universe is constructed … so that reality must finally answer in every case to the questions we bring to it [i.e., through scientific research], is entirely as anthropocentric as the notion that the universe was designed to make us possible.” Is it your view, then, that the belief—common among scientists—that all things are potentially knowable is itself actually a matter of faith? And if so, is it equally reasonable to view such mysteries as evidence of the work of a higher power? Or is it more that scientific methods of inquiry are simply unequal to the task of understanding such mysteries?

This confidence is a perfectly good beginning place for any inquiry. Its disappointments need never be considered final. It is really better thought of as a stance than as a faith. In any case, the mysteries science encounters arise from the kinds of questions it can pose. Are there multiple universes? Could we ever know how many of them there are, or how many might be inaccessible to us? Assuming that they are significantly unlike our universe, could we ever know any of them comprehensively, or know that we did or did not? I am not making a theological argument when I say that science will no doubt run up against very real limits, though I would expect much collateral insight to come from its attempts. My point is that it is remarkable to ascribe such capacities to the mind, even as potential in it. We have learned in the last few decades that we had overlooked the greater part of the mass of the universe. This is an instance in which we discovered what we had not known. There could be any number of things we don’t know we don’t know. Again, this is not a theological argument. The model nonreligious people have of religion as a way of accounting for things science has not gotten to yet is just nonsense. If the purpose of the maxim about the ultimate knowability of everything is to preemptively seize contested ground from religion, this is nonsensical for the same reason.

In “Proofs,” you also argue that the received distinction between science and religion reflects a failure to understand religion—that religion is not some brainless abdication of critical faculties, but that “[religion], like science, addresses and celebrates mystery—it explores and enacts wonder and wondering.” Would the pursuit of science be enhanced if done with a greater openness to the sources of religious insight? What might this more enlightened version of scientific exploration look like, in a practical sense?

I think scientific exploration as I described it above is just great. It should do what it is doing. This question seems to reflect that entanglement of science with “science.” I will mention the name [Richard] Dawkins to make the point that hostility to religion under the banner of science is the whole object of that exercise. Every criticism I have made of their model of reality, of human nature, motivation, and so on, would still be just as valid if they were somehow to add a tincture of religion to it.

In reading Gilead from a science-and-religion perspective, it’s hard not to get the sense that science was deliberately banished from your telling of John Ames’s life—that Gilead is in part a thought experiment to show that the principal (or highest?) meaning that one can derive from and in life must flow from the sorts of moral and existential reflection that religion allows and sustains—and that science does not. Is religion a more essential foundation for human wisdom and psychological flourishing than science?

Ames is writing in 1956. Science then was a very different thing. He would have known as much about it as any intellectually curious reader, but much he knew would be superseded by now. I didn’t want to involve myself in anachronism, and I didn’t want him to appear naive, when, by the standards of his time, he would not have been.

Pursuing this idea a bit further, in “Humanism,” you also offer a scalding critique of neuroscience, which you portray as founded on denial of the one thing that we all know to be true—that our individual, subjective selves actually do exist. Your critique would seem to suggest, then, that in being based on an ideological fallacy, neuroscience’s capacity for catalyzing false beliefs is much greater than its promise for yielding lasting insight. Is that a fair reading of your position? What are the implications of your critique of neuroscience for the scientific ideal of freedom of inquiry? 

Neuroscience will do what it will do, and should be willing to stand up to considered criticism of its methods and conclusions. If it makes a better account of itself in future than it has done to this point, excellent. Freedom of inquiry has never meant a loss of the same freedom by people who find a project questionable. I’m surprised to find such a thing suggested in a scientific context. I mention Dawkins because he is an especially voluble instance of the fact that this worldview—and I am not speaking of atheism here, but of the whole rattletrap machinery of his and their particular school of thought—is presented as Indubitable Truth. It is a bad model of science and reasoning. It makes the kinds of claims that surely exist to be tested. You use the word “ideological,” which is striking. Is it ever appropriate for science to be ideological? That may be a part of my unease. Sciences that undercut individuality, like racial science and eugenics, rationalize inhumane ideologies, which in turn support them politically. That said, my criticisms always address their methods and reasoning, areas where opinion or ideology can be put aside.

In Gilead, John Ames seems to espouse many opinions similar to arguments you have made in lectures and essays. Stylistically, however, your fiction and essays are quite different. Do fiction and essays serve different purposes for you in exploring such matters as science and religion? Do you approach them very differently, as a writer? Do you aim them at different audiences? 

I don’t really think about an audience for my fiction. My essays are all lectures, so they are written with the audience in mind that I expect at some particular occasion.

John Ames’s moral, intellectual, and spiritual inner lives are so richly realized. Do you see fiction as uniquely suited to deeply probing this sort of inner subjective reality? Does fiction—with its suspension of disbelief—make the issues you want to explore somehow more accessible to skeptical or secular readers than can be accomplished through nonfiction?

Writing is a very interior experience for me. I’m very happy to have secular readers, but I don’t think about making the work accessible to them or to anyone else.

Above all, your characters are tremendously human; they grapple with faith, but more generally with making complex, difficult decisions. Is this what it means to be human? And is the role we assign to science in our culture changing that in any way, by affecting the ways we understand ourselves and each other? 

To insist again on that distinction—real science is a spectacular achievement, a great demonstration of brilliance that should help us to value and celebrate humankind. “Neuroscience” tells us we have neither mind nor self. This can hardly enhance our value in our own eyes or one another’s.

It seems in many ways that the endeavor of the writer—or any artist—is to explore and translate what in religion is understood to be the soul. Referring back to your view of neuroscience, does this mean that art itself cannot escape conflict with science, in some ways? 

Again, that distinction—no real science offers a judgment about the reality of the human qualities traditionally called the soul. So long as an ideological neuroscience inserts itself into these questions, art and everything we call humanist will be caught up in this essentially meaningless conflict.

No Time for Rubbernecking

Among the most annoying driving experiences is to endure a long traffic jam only to discover that the cause of the delay is not an accident on your side of the road, but rubbernecking at the results of an accident on the other side. We all know that nothing is gained by staring at the wreckage and that it will unnecessarily slow down everyone’s progress, but we can’t help ourselves. The same phenomenon is now evident with political rubbernecking.

Today’s roadside disasters include a front-page New York Times story that President Trump might fire special counsel Robert Mueller and a comical video of a cabinet meeting at which all the members took turns heaping praise on a smirking president. Can Saturday Night Live top this? But the Mueller story is based on the word of an alleged friend of the president who had visited the White House but might not have actually spoken with Trump, and the ritual humiliation of the cabinet was completely devoid of policy substance.

Each day’s headlines seem to include some new deviation from the political norm, some affront to evidence-based logic, some gratuitous insult to a foreign ally, or some other example of intellectual carnage, to borrow a phrase. Their gruesomeness demands our attention, takes time from the work we know we should be doing, and slows national progress on the road to the future. Deep down we know that these tweets, bloviations, and untruths, lacking a foundation of fact or logic, will ultimately crumble under the weight of their own preposterousness. The mass hysteria will dissipate, and we should be ready with a map to sanity.

The car wreck that the research community is currently gawking at is the Trump administration’s proposed FY 2018 budget. Matt Hourihan and David Parkes, who write our From the Hill section, do their usual outstanding job of summarizing the R&D budget news in this issue. The proposed budget is indeed a sight of reckless butchery; the president has taken a cleaver to almost every part of federal R&D activity. Environmental science and alternative energy technology are particularly mangled, but even the Department of Defense, the apparent beneficiary of this wholesale budget priority realignment, would experience cuts to its basic research budget.

The proposed budget is a nightmare, and there is a certain ghoulish pleasure in obsessing over the details. But it is a Washington tradition to announce every year that the president’s budget is dead on arrival, even when the president’s party controls both houses of Congress. This budget is even deader than most. Republican congressional leaders have been unusually blunt in their lack of enthusiasm. The problems begin with glaring arithmetical errors, unhinged assumptions about economic growth, and claims of revenue from sources such as the estate tax that the administration seeks to eliminate. These problems will derail this proposal before the discussion of spending priorities even begins.

Although the nation’s intellectual elite will almost surely take some satisfaction in finding common ground across the ideological spectrum in mocking the president’s indifference to the conventions of governing, the value of the press, and the need for evidence and logic to support public policies, this is not an adequate response. Smug derision is too easy. A more constructive response would be to capitalize on this sudden meeting of the minds to develop a rigorous, less ideological approach to the thorny challenges that face the nation and the world. Although lack of experience and discipline is likely to result in the implosion of the Trump agenda, that in itself is not the ideal outcome. Instead of wasting our time transfixed by the wreckage, we should be building an alternative program for guiding the country.

The articles in this edition of Issues aim to do just that, to tackle the fundamental aspects of science and society that will shape the world’s future direction. Several authors take on the challenges that confront science itself. Sheila Jasanoff provides an insightful historical review of the evolving role of scientific expertise in public affairs and makes a compelling case for a nuanced, transdisciplinary, and collective effort to arrive at some widely shared public truths that can provide a foundation for public policy debates. Richard Harris addresses pervasive evidence of a decline in biomedical research reliability. He acknowledges that research is a process of continuous exploration and revision. As scientists understand, no paper should be considered sacred doctrine, but Harris highlights some of the undesirable incentives in the research system that encourage detrimental research practices that could erode overall scientific integrity. Keith Kloor worries about what happens to scientific disinterestedness in research areas such as climate change and endangered species, where the public debate has become highly politicized. Scientists should be engaged in discussions of public policy, but they need to do so without losing touch with the underlying scientific principle of truth seeking. Scientific rigor and openness are what earn researchers respect in public discourse. We should leave the hand-to-hand combat of political maneuvering to the pros.

While the research community is getting its own house back in order, it can focus on the plate tectonics of scientific and technological progress. The research lab might seem remote, secluded, and esoteric, but it actually provides an essential vantage point for identifying trends and capabilities that will have an earth-shifting influence on the future. Amitai and Oren Etzioni provide a useful context for understanding the progress of artificial intelligence. They distinguish between trends that are an extension of current activities and those, such as autonomous weapons, that deserve close attention because they are qualitatively different from what preceded them. Likewise, Braden Allenby reviews the long history of information warfare and warns that new technology is enabling a capability in “weaponized narrative” that is more efficient, more insidious, and more powerful than what we have managed before. As the world’s leading economic and political power, the United States is a particularly appealing target for this type of asymmetric warfare.

Social science research can also help us probe deeper and further. Granger Morgan and his colleagues, who were pioneers in the application of benefit/cost analysis to climate and energy policy, have come to recognize that this tool, which has been applied widely and successfully in many aspects of environmental policy, is ill-suited to the assessment of greenhouse gas emissions because of their extraordinarily long residence time in the atmosphere. When looking ahead fifty or a hundred years, a very small adjustment in the discount rate used to measure an investment’s value in the future can have an enormous impact on the result. Thus, with a little tinkering, one can produce almost any result one wants regarding the social cost of emissions. They go on to explain why a fundamentally different approach is therefore needed to guide climate policy.
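Their point about discounting is easy to see with a little arithmetic. Under standard exponential discounting, a dollar of damages t years in the future is worth 1/(1 + r)^t dollars today, so over a century even a percentage-point change in the rate r moves the answer enormously. The sketch below is a minimal illustration of that sensitivity; the $1 trillion damage figure and the rates are hypothetical, not numbers from Morgan and his colleagues.

```python
# Illustrative only: how sensitive a 100-year present value is to the
# discount rate. The damage figure and the rates are hypothetical.

def present_value(future_damages: float, rate: float, years: int) -> float:
    """Standard exponential discounting: PV = FV / (1 + r) ** years."""
    return future_damages / (1.0 + rate) ** years

DAMAGES = 1_000_000_000_000  # $1 trillion of damages, 100 years from now

for rate in (0.01, 0.03, 0.05, 0.07):
    pv = present_value(DAMAGES, rate, 100)
    print(f"discount rate {rate:.0%}: present value = ${pv / 1e9:,.1f} billion")
```

Moving the rate from 1% to 7% shrinks the present value from roughly $370 billion to about $1 billion, a swing of more than two orders of magnitude from the discount rate alone, which is exactly the “tinkering” the authors warn about.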

Ted Nordhaus taps the insights of social science to explain why very desirable and very impressive improvements in energy efficiency technology will not deliver the overall energy savings that they promise. Energy saved by end users buying more efficient light bulbs and air conditioners tends to be plowed back into more energy-intensive sectors of the economy such as construction and manufacturing. In the short term this means that efficiency may not be a good tool for significantly reducing greenhouse gas emissions. But the story doesn’t end there. The production sectors of the economy that benefit from efficiency gains also help drive economic growth worldwide, and Nordhaus sees a future when the world reaches a stage of development at which efficiency actually will begin reducing overall energy use—but we’re not there yet. David Ropeik delves into the complicated psychology underlying public opinion toward nuclear power. He concludes that group identification—not economics, safety, or climate concerns—is the key factor in how people perceive the technology.

The humanities, too, have insights to share. Philosopher Evelyn Brister examines how deeply held beliefs about the natural world are at the heart of debates about genetic engineering and ecological balance.

Science and the humanities alike are founded on the use of evidence and logical argument. They seek to avoid the unfounded assumptions, knee-jerk reactions, and group-think that have come to dominate much political discourse. Neither of them provides simple or easy answers to society’s questions, and neither of them provides the surge of adrenalin that comes with viewing a disaster. We’ll never be able to completely resist the rush that comes with seeing car parts or common sense spread across the asphalt, but the sooner we can return our attention to the road in front of us, the better the chance that we will arrive at our desired destination.

Publish and Perish

Two laboratories thought they’d found the perfect workaround to the ethically thorny issue of using stem cells from human embryos for research. In 1999 and 2000, they reported that they’d figured out how to convert bone marrow cells into many different kinds of tissues.

The field went wild. Within just a few years, by biologist Sean Morrison’s count, hundreds of labs reported exciting results in which bone marrow cells “transdifferentiated” into many useful varieties of cells. Scientists scrapped their ongoing research plans to dive into this rapidly growing field.

But was it real? Amy Wagers and colleagues at Stanford University decided to find out. They ran a series of carefully crafted experiments and concluded in 2002 that transdifferentiation of bone-marrow cells essentially didn’t exist (beyond the cells’ well-known ability to change into various types of blood cells).

The entire endeavor popped like a soap bubble.

“This episode illustrated how the power of suggestion could cause many scientists to see things in their experiments that weren’t really there and how it takes years for a field to self-correct,” Morrison wrote in an editorial in the journal eLife.

Morrison, a Howard Hughes Medical Institute investigator at the University of Texas Southwestern Medical Center, wasn’t simply concerned about the effort wasted in this one line of research. He is concerned that problems such as this pervade the biomedical literature and contribute to what’s become known as the “reproducibility crisis.”

I’d been covering science for 30 years for National Public Radio, and as stories such as these began to accumulate, I decided to spend a year systematically investigating the problem of poor-quality science. I describe what I learned in my new book, Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. And though the problems I uncovered made it clear that the biomedical research system faces serious challenges, I was also surprised and inspired by the openness with which almost everyone I interviewed was willing to talk about the crisis, its origins, and possible solutions.

But which half?

By some accounts, as much as half of what gets published in the biomedical literature is deeply flawed, if not outright false. Of course, we should not expect perfection from research labs. Science is by its very nature an error-prone enterprise. And so it should be. Safe ideas don’t push the field forward. Risk begets reward (or, of course, failure).

Only a few studies have tried to measure the magnitude of this problem directly. In one, scientists at the MD Anderson Cancer Center asked their colleagues whether they’d ever had trouble reproducing a study. Two-thirds of the senior investigators answered yes. Asked whether the differences were ever resolved, only about one-third said they had been. “This finding is very alarming as scientific knowledge and advancement are based upon peer-reviewed publications, the cornerstone of access to ‘presumed’ knowledge,” the authors wrote when they published the survey findings.

In another effort, the American Society for Cell Biology surveyed its members in 2014 and found that 71% of those who responded had at some point been unable to replicate a published result, and they reported that 40% of the time the conflict was never resolved. Two-thirds of the time, the scientists suspected that the original finding had been a false positive or had been tainted by “a lack of expertise or rigor.” The society adds an important caveat: of the 8,000 members surveyed, it heard back from only 11%, so its numbers aren’t definitive. That said, Nature surveyed more than 1,500 scientists in the spring of 2016 and saw very similar results: more than 70% of scientists who responded had tried and failed to reproduce an experiment, and about half of those agreed that there’s a “significant crisis” of reproducibility. Only 10% said there was no crisis at all, or that they had no opinion.

“I don’t think anyone gets up in the morning and goes to work with the intention to do bad science or sloppy science,” says Malcolm Macleod of the University of Edinburgh. He has been writing and thinking about this problem for more than a decade. He started off wondering why almost no treatment for stroke has succeeded (with the exception of the drug tPA, which dissolves blood clots but doesn’t act on damaged nerve cells), despite many seemingly promising leads from animal studies.

As he dug into this question, he came to a sobering conclusion. Unconscious bias among scientists arises every step of the way: in selecting the correct number of animals for a study, in deciding which results to include and which to toss aside, and in analyzing the final results. Each step of that process introduces considerable uncertainty. Macleod said that when you compound those sources of bias and error, only around 15% of published studies may be correct. In many cases, the reported effect may be real but considerably weaker than the study concludes.
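Macleod’s compounding argument can be reproduced in outline. If each stage of a study is handled soundly only some fraction of the time, and the stages multiply, the share of fully sound studies falls quickly. The sketch below is our own illustration of that arithmetic; the stage names and probabilities are assumptions chosen to land near his figure, not his actual estimates.

```python
# Illustrative compounding of per-step reliability across a study's
# workflow. These stages and probabilities are assumptions made for the
# sake of the arithmetic, not Macleod's estimates.
steps = {
    "appropriate study design": 0.70,
    "adequate animal numbers": 0.70,
    "unbiased outcome selection": 0.70,
    "sound statistical analysis": 0.70,
    "unbiased reporting and publication": 0.65,
}

sound = 1.0
for stage, probability in steps.items():
    sound *= probability  # every stage must go right for the study to hold up
    print(f"after {stage}: {sound:.1%} of studies still sound")
```

Five stages that are each “mostly fine” leave only about 16% of studies untouched by bias, which is how modest per-step problems can add up to Macleod’s startling estimate.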

These problems are rarely deliberate attempts to produce misleading results. Unconscious bias, like the wishful thinking that drove the transdifferentiation frenzy, is a common explanation. That’s partly a consequence of human nature.

“We might think of an experiment as a conversation with nature, where we ask a question and listen for an answer,” Martin Schwartz of Yale University wrote in an essay titled “The Importance of Indifference in Scientific Research,” published in the Journal of Cell Science.

This process is unavoidably personal because the scientist asks the question and then interprets the answer. When making the inevitable judgments involved in this process, Schwartz says, scientists would do well to remain passionately disinterested. “Buddhists call it non-attachment,” he wrote. “We all have hopes, desires and ambitions. Non-attachment means acknowledging them, accepting them and then not inserting them into a process that at some level has nothing to do with you.”

That is more easily said than done. As physicist Richard Feynman famously told a graduating class at Caltech as he talked about the process of science, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

235 reasons why

And there’s no shortage of ways to go astray. Surveying papers from biomedical science in 2010, David Chavalarias and John Ioannidis cataloged 235 forms of bias, which they published in the Journal of Clinical Epidemiology. Yes, 235 ways scientists can fool themselves, with sober names such as confounding, selection bias, recall bias, reporting bias, ascertainment bias, sex bias, cognitive bias, measurement bias, verification bias, publication bias, observer bias, and on and on.

But though biases may typically be unconscious, this is not simply a story of human nature. Scientists are also more likely to fool themselves into believing splashy findings because the reward system in biomedical research encourages them to do so.

Some of the pressure results from the funding crunch facing biomedical research. The National Institutes of Health budget doubled between 1998 and 2003, leading to a vast expansion of the enterprise. That included a 50% increase in biomedical lab space at universities. But in 2003, Congress stopped feeding the beast. Adjusting for an inflation rate calculated for biomedical research and development, funding declined by 20% in the following decade. That pressure means that less than one in five grants gets funded. And that creates an incentive for scientists to burnish their results.

“Most people who work in science are working as hard as they can. They are working as long as they can in terms of the hours they are putting in,” says Brian Martinson, a sociologist at HealthPartners Institute in Minneapolis. “They are often going beyond their own physical limits. And they are working as smart as they can. And so if you are doing all those things, what else can you do to get an edge, to get ahead, to be the person who crosses the finish line first? All you can do is cut corners. That’s the only option left you.”

Martinson was a member of the National Academies of Sciences, Engineering, and Medicine’s committee that in April 2017 published a report on scientific integrity. It updated a report produced 25 years earlier. According to committee member C. K. Gunsalus, the previous study focused on the “bad apples” in research—those few scientists who were actively engaging in inappropriate behavior. The 2017 study looks more closely—as Gunsalus puts it, at the barrel itself and the barrel makers—to focus on the incentives that are driving scientists toward conclusions that don’t survive the test of time.

One of the central problems revolves around publishing. Top journals want exciting findings to publish, because hot papers bolster their “impact factor,” which ultimately can translate into profits. University deans, in turn, look to those publications as a surrogate for scientific achievement.

Veronique Kiermer served as executive editor of Nature and its allied journals from 2010 to 2015, when this issue came to a boil. She’s dismayed that the editors at Nature are essentially determining scientists’ fates when choosing which studies to publish. Editors “are looking for things that seem particularly interesting. They often get it right, and they often get it wrong. But that’s what it is. It’s a subjective judgment,” she told me. “The scientific community outsources to them the power that they haven’t asked for and shouldn’t really have.” Impact factor may gauge the overall stature of a journal, she said, “but the fact that it has increasingly been used as a reflection of the quality of a single paper in the journal is wrong. It’s incredibly wrong.”

The last experiment

Sometimes gaming the publication system can be as easy as skipping a particular experiment. Olaf Andersen, a journal editor and professor at Weill Cornell Medical College, has seen this type of omission. “You have a story that looks very good. You’ve not done anything wrong. But you know the system better than anybody, and you know that there’s an experiment that’s going to, with a yes or no, tell you whether you’re right or wrong,” Andersen told me. “Some people are not willing to do that experiment.” A journal can crank up the pressure even more by telling scientists that it will likely accept their paper if they can conduct one more experiment backing up their findings. Just think of the incentive that creates to produce exactly what you’re looking for. To Kiermer, the former Nature editor, “That is dangerous. That is really scary.”

Something like that apparently happened in a celebrated case of scientific misconduct in 2014. Researchers in Japan claimed to have developed an easy technique for producing extraordinarily useful stem cells. A simple stress, such as giving cells an acid bath or squeezing them through a tiny glass pipe, could reprogram them to become amazingly versatile. The paper was reportedly rejected by the journals Science, Nature, and Cell.

Undaunted, the researchers modified it and then resubmitted to Nature, which published it. Nature won’t say what changes the authors had made to enable it to pass muster on a second peer review, but the paper didn’t stand the test of time. Labs around the world tried and failed to reproduce the work (and ultimately suggested how the original researchers may have been fooled into believing that they had a genuine effect). RIKEN, the Japanese lab where the research was done, retracted the paper and found the first author guilty of scientific misconduct. Her widely respected supervisor committed suicide as the story unfolded in the public spotlight.

There is no question that the pressures built up in the system are having a corrosive effect on the output from scientific labs. But Henry Bourne, an emeritus professor at the University of California, San Francisco, also believes scientists themselves need to change. “I think that is what the real problem is—balancing ambition and delight,” he told me. Scientists need both ambition and delight to succeed, but right now the money crunch has tilted them far too much in the direction of personal ambition.

“Without curiosity, without the delight in figuring things out, you are doomed to make up stories,” Bourne said. “Occasionally they’ll be right, but frequently they will not be. And the whole history of science before the experimental age is essentially that. They’d make up stories, and there wouldn’t be anything to most of them. Biomedical science was confined to the four humors. You know how wonderful that was!” Hippocrates’s system based on those humors—blood, yellow and black bile, and phlegm—didn’t exactly create a solid foundation for understanding disease.

Bourne argued that if scientists don’t focus on the delight of discovery, “what you have is a whole bunch of people who are just like everybody else: they want to get ahead, put food on the table, enjoy themselves. In order to do so, they feel like they have to publish papers. And they do, because they can’t get any money if they don’t.” But papers themselves don’t move science forward if they spring from flimsy ideas.

Fixing this will also require a new attitude among deans, funding panels, journal editors, and tenure committees, who all have competing needs. Nobody is particularly happy with the current state of affairs, but the situation is unlikely to correct itself. Perhaps it is time for leading scientists, heads of scientific societies and academies, university presidents, journal editors, funding agency leaders, and policy makers to come together and work toward specific policies and practices that can begin to free scientists from the perverse and baked-in incentives of today’s scientific culture—to free them to put the delight of discovery above the ambition to get yet another grant and add yet another publication to their curriculum vitae.

Richard Harris is a science correspondent at NPR News.

From the Hill – Summer 2017

Trump Budget Proposal: Gloomy, but Just a Proposal

The eyes of the research community are focused on the Trump administration’s proposed FY 2018 budget, which was released in May, and its fate in Congress. The proposed budget includes significant cuts in R&D spending, but Congress’s final action on the FY 2017 budget, which took place in May, indicates that the current Republican-controlled Congress is not as willing as the president to reduce R&D spending. Although it was feared that the omnibus bill would be the first step in implementing the cuts that President Trump had promised, Congress approved a budget for the remainder of FY 2017 that would actually increase federal R&D by 5% above FY 2016 levels, according to AAAS estimates, with increases for basic and applied research, development, and R&D facilities funding.

Funding was increased for several agencies and programs—perhaps most notably the National Institutes of Health (NIH); the National Oceanic and Atmospheric Administration (NOAA) research office; Department of Defense (DOD) science and technology programs; and the Advanced Research Projects Agency-Energy. These increases are the opposite of the stated plans of the Trump administration, providing an indication that Congress might not be inclined to accept the president’s proposal for budget cuts in these areas in FY 2018.

That would be welcome news to the research community because the administration’s budget proposal includes historically large cuts in R&D spending in most government programs.

The reductions in research funding are part of an overall realignment of priorities in discretionary spending. The administration proposes reducing nondefense discretionary spending by $54 billion, or 10.9%, below FY 2017 levels in order to boost defense spending. But what’s clear now is that the Trump administration intends this to be just the first in a decade-long series of cuts that would end with an FY 2027 nondefense discretionary budget that is 41.9% lower than the FY 2017 total in constant dollars.
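A back-of-the-envelope calculation shows what that trajectory implies year to year. Assuming, purely for illustration, that the cuts are spread evenly in constant dollars:

```python
# Back-of-the-envelope: what constant yearly real cut leaves the FY 2027
# nondefense budget 41.9% below FY 2017? Equal annual cuts are our
# simplifying assumption, not the administration's stated plan.
remaining = 1.0 - 0.419            # 58.1% of the FY 2017 level left by FY 2027
years = 10
annual_cut = 1.0 - remaining ** (1.0 / years)
print(f"implied average annual real cut: {annual_cut:.1%}")   # ~5.3%

# Sanity check: apply that cut ten years in a row.
budget = 1.0
for _ in range(years):
    budget *= 1.0 - annual_cut
print(f"FY 2027 budget as a share of FY 2017: {budget:.1%}")  # ~58.1%
```

In other words, ending 41.9% lower means shaving roughly 5% off nondefense spending, in real terms, every year for a decade.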

This matters because every science and technology agency and program outside the DOD and the National Nuclear Security Administration is housed in the nondefense budget. Priorities within nondefense federal activities are relatively stable over time, so a reduction in overall spending will mean a reduction in R&D spending, even in popular programs such as NIH.

A detailed look at the proposed FY 2018 budget provides a clear picture of what to expect. According to preliminary AAAS estimates, the White House would cut total research funding by 16.8%, or $12.6 billion. Our analysis indicates that this would be the largest decline in federal R&D support in more than 40 years and that federal research spending would equal just 0.31% of gross domestic product, the lowest level in more than 40 years.

The National Institutes of Health is slated for a 21.5% reduction below omnibus funding levels, which would essentially wipe out the gains of the agency’s budget doubling, begun in 1998 and completed during the George W. Bush administration. The White House would eliminate the Fogarty International Center and consolidate the Agency for Healthcare Research and Quality into NIH as a new institute. But the long history of strong bipartisan support for NIH research and the current Congress’s decision to increase NIH funding in the FY 2017 omnibus bill suggest that the administration’s proposed cuts will not be accepted.

The National Science Foundation (NSF) budget would decrease by $819 million or 11% below FY 2017. The agency estimates that this would result in 800 fewer new research grant awards than the 8,800 total in FY 2016. The proposal success rate is expected to drop from 21% in FY 2016 to 19% in the upcoming fiscal year.

The Research & Related Activities account, made up of NSF’s core research programs across multiple disciplines, would see a cut of $672 million or 11.1% below FY 2017. The six research directorates within this account would see roughly equal percentage reductions of around 10% each.

The Directorate for Education and Human Resources would be reduced by $119 million or 13.6%, with a particularly sharp cut to graduate research fellowships. The Experimental Program to Stimulate Competitive Research, which seeks to broaden the geographic distribution of NSF dollars, would also see a funding reduction of $60 million or 37.5% below the FY 2016 amount of $160 million. NSF’s cross-foundation investments (including Innovations at the Nexus of Food, Energy, and Water Systems; Risk and Resilience; and Understanding the Brain) would fall below FY 2016 levels.

The net result of these cuts would be to reduce the inflation-adjusted budget to its FY 2002 level, thus eliminating all the gains fueled by the 2007 America COMPETES Act, which had set a goal of doubling the NSF budget.

In spite of the overall increase in defense spending, Department of Defense science and technology programs would generally not benefit. DOD basic research would be cut by 2.1%. The Army’s research and advanced technology programs would be cut by 22.4%. On the other hand, the Defense Advanced Research Projects Agency would receive a 9.7% increase.

The proposed Department of Energy budget illustrates the administration’s general approach to science and technology: a particular skepticism of federal technology programs and hostility to climate research; a general interest in scaling back even fundamental science; and a desire to increase investment in defense-related activities.

Starting with basic research, the Office of Science budget would receive a 17.1% reduction from FY 2017 omnibus levels, returning its budget to where it was about 10 years ago. The sole program to receive an increase is Advanced Scientific Computing Research, at 11.6% above omnibus levels, largely due to a 19.9% increase for its exascale computing activities. Most research areas within Basic Energy Sciences (BES), including materials science, physics, and chemical science, appear slated for at least some reduction. The budget eliminates funding for BES’s two innovation hubs, which focus on energy storage and artificial photosynthesis, and for the Experimental Program to Stimulate Competitive Research, which directs funds to states that receive a disproportionately small share of federal research spending. BES user facilities would also see a scaling back from FY 2017 omnibus funding levels. For instance, BES’s five synchrotron radiation light sources would see a 12.4% reduction, and the Nanoscale Science Research Centers would see a 41.8% reduction.

Unsurprisingly, given its past focus on climate, Biological and Environmental Research (BER) would receive the largest relative reduction of any program area within the Office of Science, with its environmental research branch rebranded away from climate and newly named Earth and Environmental Systems Sciences. Biological sciences would be trimmed (including a 46.6% reduction for the Bioenergy Research Centers), and the administration proposes even sharper cuts for environmental science. That side of BER would drop from an overall budget of $314.7 million in FY 2016 to $123.6 million in FY 2018. Curiously, the administration has proposed a 26% increase to $63 million for the International Thermonuclear Experimental Reactor (ITER), the troubled international project supported via Fusion Energy Sciences. Funding for domestic research activities would be reduced by 25.2% in total, with particular reductions for fundamental plasma research. Neither High Energy Physics nor Nuclear Physics was given much detail in the omnibus package, but both would be subject to general reductions below FY 2016 levels in multiple areas.

The Energy Department’s applied technology programs would receive even deeper cuts, reflecting the administration’s interest in limiting the scope of government’s role in science and technology and its preference for relying instead on industry to bring new technologies to fruition. Perhaps the biggest decision is the proposed elimination of the Advanced Research Projects Agency-Energy, which funds high-risk technology projects and which just received a solid funding boost from Congress. The Office of Energy Efficiency and Renewable Energy would also see severe reductions to its assorted programs, ranging from 55.4% for hydrogen and fuel cells to 82% for geothermal. The budget would zero out the office’s innovation hubs on advanced materials and desalination, the latter of which just received its first funding in the omnibus, and its manufacturing innovation institutes. The Fossil Energy R&D program would substantially scale back most activities, including carbon capture and storage pilot projects and R&D on advanced combustion systems, re-focusing exclusively on exploratory technology activities in hopes that industry will take on greater responsibility across the board. The Office of Nuclear Energy would similarly see a reduction in several activities, with its innovation hub on modeling and simulation zeroed out. R&D related to advanced reactor technology and fuel cycle sustainability, efficiency, and safety would be scaled back and shifted to earlier-stage technology. The office would, however, pursue a $10 million plan to build a new fast test reactor.

The National Nuclear Security Administration, benefiting from the proposed 10% increase in defense spending, would see a mix of increases for its research, development, test, and evaluation accounts. The primary accounts providing funding for the National Ignition, Z, and Omega facilities would see only modest changes, and activities related to exascale computing would also see increased funding.

The National Aeronautics and Space Administration (NASA) would see an overall reduction of 2.9%, a virtual windfall compared with other agencies. The space agency has enjoyed recent funding gains, with a $1.3 billion increase in FY 2016 and a smaller but substantial boost in the FY 2017 omnibus.

Within NASA’s Science Mission Directorate, the budget provides Planetary Science with a 4.5% increase, including a $150 million boost above FY 2017 omnibus funding to $425 million in total, for a planned mission to Jupiter’s moon Europa. The FY 2018 proposal would bolster the Discovery missions and continue Mars activities, though at funding levels below the omnibus. It would also cut New Frontiers by 40%. The Earth Science portfolio would decrease by 8.7%. The budget maintains support for Landsat 9 development. The Heliophysics Program would be flat-funded from FY 2017, whereas Astrophysics would see a total 8.9% increase to fund the Wide-Field Infrared Survey Telescope, among other missions. The proposal provides the full level of funding to keep the James Webb Space Telescope on schedule for a 2018 launch.

The Space Launch System and Orion Multipurpose Crew Vehicle, which both receive strong support in Congress, would be trimmed below FY 2017 levels. The budget confirms plans to cancel the Asteroid Redirect Mission, an Obama administration priority, and instead directs efforts toward developing solar-electric propulsion capabilities. The administration also offers no funding for RESTORE-L, which aims to demonstrate the servicing of a government satellite in low-Earth orbit and was funded at $130 million in the FY 2017 omnibus. NASA’s Commercial Crew Program would see a substantial funding reduction of $453 million or 38.2% below the FY 2017 level.

NASA Aeronautics would decline by 5.4% but receive continued support for the New Aviation Horizons initiative, which is carrying out a series of experimental X-Plane demonstration activities. The Small Business Innovation Research and Small Business Technology Transfer programs within the Space Technology Mission Directorate would fall below FY 2016 levels. Finally, the president’s budget proposes the termination of the Office of Education, responsible for the Space Grant consortia and other activities, requesting $37 million to wind down activities.

The Agricultural Research Service (ARS) would see a reduction of 29.2% below the FY 2017 omnibus. This includes a 15.2% or $177.9 million reduction for ARS’s primary research account, which would result in the closure of 17 laboratories and other worksites, representing nearly a fifth of all locations. Projects in all areas would see some level of reduction or elimination, with particular reductions targeted at research programs in bio-based products and biofuels (by at least 29.5%) and human nutrition (by at least 48.5%). In addition, the administration recommends rescinding all budget authority for facilities construction granted by Congress in FY 2017. ARS had originally intended to use that funding for construction at the Agricultural Research Technology Center in Salinas, California, and at the Foreign Disease-Weed Science Research Unit at Ft. Detrick, Maryland.

The National Institute of Food and Agriculture would see an 8.1% reduction. The institute would keep the largest formula-fund programs nearly flat in FY 2018, save for a $5 million or 15.0% reduction to McIntire-Stennis state forestry research. It would eliminate several smaller activities, including capacity grants at non-land grant universities; research programs on alfalfa, animal disease, and aquaculture; and multiple education programs. Sustainable agriculture grant funding would decline by at least 22.8%. The Agriculture and Food Research Initiative, the Department of Agriculture’s competitive extramural research program, would decline to $349.3 million in FY 2018, which is 6.8% below omnibus levels. The administration would allow the small Biomass R&D Initiative, a mandatory multiagency program authorized through FY 2017 in the most recent farm bill, to expire.

The Economic Research Service would see an 11.6% reduction below omnibus levels. Several work areas would see reductions, including program evaluation, analysis of drought resilience, bioenergy data modeling, and other data acquisition and access. The National Agricultural Statistics Service would receive an overall 8.4% increase above omnibus levels. Although the service’s core survey work would be cut 5.6% as a result of reducing the sample sizes of several survey series, funding for the Census of Agriculture would be increased 50% to $63.9 million.

The Forest Service’s Forest & Rangeland Research funding account would be reduced by 10.2%. Affected research program areas include invasive species, air quality research, clean water, and resource management. The Forest Service’s fire-related R&D activities would be reduced by a similar amount, and efforts to understand the social and economic elements of wildfire would be terminated.

According to agency and historical data, total R&D funding for the Agriculture Department in FY 2018 would drop to its lowest point since 1989 in inflation-adjusted dollars.

The National Institute of Standards and Technology’s (NIST) core research laboratories would take a substantial reduction in FY 2018, and the agency’s industrial services account would be nearly eliminated. The Scientific and Technical Research Services account, which funds NIST’s seven core research laboratories, would see a large $90 million or 13% cut. This would result in a 10% reduction in NIST’s scientific workforce. The cut would reduce funding for many program areas, including advanced materials manufacturing, semiconductor measurements, cybersecurity, and quantum science, among others. The budget would eliminate NIST’s extramural Fire Research Grants Program and the Nanomaterial Environment, Health, and Safety Program, which studies the potential environmental or health impacts of engineered nanomaterials.

Within NIST’s Industrial Technology Services account, the Hollings Manufacturing Extension Partnership would be eliminated, with $6 million requested to cover costs associated with winding down the program. The elimination would affect over 2,500 partners and approximately 9,400 client firms, according to agency budget documents. Manufacturing USA, formerly known as the National Network for Manufacturing Innovation, would receive $15 million, a $10 million reduction.

The National Oceanic and Atmospheric Administration’s total discretionary budget would decrease by $900 million or 15.9% below the 2017 omnibus level. Steep cuts would hit the National Ocean Service and climate, weather, and air chemistry research programs.

The Office of Oceanic and Atmospheric Research, the primary R&D arm of NOAA, would face a 31.9% cut. A 19% cut to NOAA Climate Research would reduce funding for Cooperative Institutes, universities, NOAA laboratories, and other partners. The 25.4% proposed cut to NOAA Weather and Air Chemistry Research would terminate the Air Resources Laboratory, which studies air pollution and climate variability, and the Unmanned Aircraft Systems Program Office. The budget would also eliminate the Joint Technology Transfer Initiative, recently established to quickly transition the latest research into weather forecasting products, and Vortex-Southeast, an effort to better understand tornado formation in the US Southeast. Funding for the Ocean, Coastal, and Great Lakes Research Program would be cut by nearly half, with a proposed elimination of the National Sea Grant College Program.

For the National Ocean Service, the budget proposes cutting $23 million in federal support to states for the management of the National Estuarine Research Reserve System, a network of 29 coastal sites designated to protect and study estuarine systems. Within the National Environmental Satellite, Data, and Information Service, the Geostationary Operational Environmental Satellite-R Series and the Joint Polar Satellite System would see funding reductions in line with planned launch preparation activities, and the Polar Follow On, currently funded at $369 million, would be cut in half, with plans to seek cost efficiencies and leverage partnerships. The National Weather Service’s Science and Technology Integration Office would see several program terminations. NOAA’s ship fleet recapitalization efforts would be flat-funded at $75 million, which would support construction of a second NOAA Class A vessel for oceanographic research.

The Environmental Protection Agency’s science and technology account would be cut by $263 million, which is 36.8% below the FY 2017 omnibus level. The US Geological Survey would be subject to an overall 15% cut from its $1.1 billion level.

The Census Bureau would receive a $51 million or 4.3% increase for periodic censuses and programs. This comes amid rising cost concerns in preparing for the 2020 Decennial Census. The bureau also conducts a range of monthly, quarterly, and annual surveys, including the American Community Survey, a source of detailed community-level information about employment, educational attainment, and other topics.