The Second Coming of UK Industrial Strategy
The United Kingdom dismantled industrial policies in the 1980s; today it must rebuild them to create a social-industrial complex.
Industrial strategy, as a strand of economic management, was killed forever by the turn to market liberalism in the 1980s. At least, that’s how it seemed in the United Kingdom, where the government of Margaret Thatcher regarded industrial strategy as a central part of the failed post-war consensus that its mission was to overturn. The rhetoric was about uncompetitive industries producing poor-quality products, kept afloat by oceans of taxpayers’ cash. The British automobile industry was the leading exhibit, not at all implausibly, for those of us who remember those dreadful vehicles, perhaps most notoriously exemplified by the Austin Allegro.
Meanwhile, such things as the Anglo-French supersonic passenger aircraft Concorde and the Advanced Gas-cooled Reactor program (the flagship of the state-controlled and -owned civil nuclear industry) were subjected to serious academic critique and deemed technical successes but economic disasters. They exemplified, it was argued, the outcomes of technical overreach in the absence of market discipline. With these grim examples in mind, over the next three decades the British state consciously withdrew from direct sponsorship of technological innovation.
In this new consensus, which coincided with a rapid shift in the shape of the British economy away from manufacturing and toward services, technological innovation was to be left to the market. The role of the state was to support “basic science,” carried out largely in academic contexts. Rather than an industrial strategy, there was a science policy. This focused on the supply side—given a strong academic research base, a supply of trained people, and some support for technology transfer, good science, it was thought, would translate automatically into economic growth and prosperity.
And yet today, the term industrial strategy has once again become speakable. The current Conservative government has published a white paper—a major policy statement—on industrial strategy, and the opposition Labour Party presses it to go further and faster.
This new mood has been a while developing. It began with the 2007-8 financial crisis. The economic recovery following that crisis has been the slowest in a century; a decade on, with historically low productivity growth, stagnant wage growth, and no change to profound regional economic inequalities, coupled with souring politics and the dislocation of the United Kingdom’s withdrawal from the European Union, many people now sense that the UK economic model is broken.
Given this picture, several questions are worth asking. How did we get here? How have views about industrial strategy and science and innovation policy changed, and to what effect? Going forward, what might a modern UK industrial strategy look like? And what might other industrialized nations experiencing similar political and economic challenges learn from these experiences?
Changing views about industrial strategy and science policy have accompanied wider changes in political economy. The United Kingdom in 1979 was one of the most research-intensive economies in the world. A very significant industrial research and development (R&D) base, driven by conglomerates such as BAC (in aerospace), ICI (in chemicals and pharmaceuticals), and GEC (in electronics and electrical engineering), was accompanied by a major government commitment to strategic science.
In common with other developed nations at the time, the UK’s extensive infrastructure of state-run research establishments developed new defense technologies, as part of what the historian David Edgerton called the “warfare state.” Civil strategic science was not neglected either; nationalized industries such as the Central Electricity Generating Board and the General Post Office (later to become British Telecommunications) ran their own laboratories and research establishments in areas such as telecommunications and energy. The Atomic Energy Authority carried out both military and civil nuclear research.
This situation was the product of a particular consensus established following the Second World War. From the left wing of the science-and-technology establishment there was a pre-war enthusiasm for central planning most coherently and vocally expressed by the Marxist crystallographer J. D. Bernal. From the right, there were the military engineers and capitalist chemists who built the Cold War state. From the left side of politics, there was the 1964 government of Harold Wilson proclaiming the “white heat” of technology as the mechanism by which the United Kingdom would be modernized. From the right, there was the determination, in the face of the UK’s relative geopolitical decline and economic difficulties, to remain a front-rank military power, with the accompanying decision to develop and maintain an independent nuclear weapons capability.
The ideological basis for an attack on this consensus was developing in the 1950s and 1960s. The leading figure here was Friedrich Hayek, an Austrian-British economist and philosopher and author of forceful critiques of the notion of central planning in general. His friend and intellectual ally, the chemist Michael Polanyi, adapted this argument specifically to oppose the case for planning and direction in science. Polanyi insisted on a strict division between pure science and applied science, introducing the idea of an independent “republic of science” that should remain free of any external direction. This idea was, and remains, very attractive to the world of elite academic science, though it is debatable whether this powerful myth ever described an actual, or indeed a desirable, situation.
Margaret Thatcher was the critical individual through whom these ideas became translated into policy. The influence of Hayek on Thatcher’s general thinking about economics and policy is well known. But Thatcher was also a scientist, whose practical experience was in the commercial world, as an industrial chemist. In a 2017 article in Notes and Records, the Royal Society journal of the history of science, the historian Jon Agar traced the influence of Thatcher’s own experience as a scientist on the evolution of science and innovation policy in her governments. In short, nothing in her experience, or in the experience of those who advised her, would persuade her that there was any special status for science that should exclude it from the market mechanisms to which she believed the whole economy should be subject.
Since the market turn, a key feature of science policy initiated by the Thatcher governments has been the decline of state-sponsored strategic science. By strategic science, I mean science that directly supports what the state regards as strategically important. The outstanding category here is of course the science directly motivated by defense needs. However, strategic science also includes science that supports the infrastructure of the market, for standards and regulation. It could also include science that supports environmental protection, communications infrastructure, medical advance, and the supply of energy.
The obvious point here is that the boundaries of what the state defines as strategic may change with time. Given that the Thatcher government had an explicit goal of shrinking the state, it is unsurprising that the state withdrew support for R&D in areas formerly thought of as strategic. The program of privatization took industries such as steel and telecommunications out of state control and left decisions about the appropriate degree of support for R&D to the market.
This had the largest effect in the area of energy. The privatized energy companies aimed to maximize returns from the assets they inherited, and levels of R&D fell dramatically. What had been a large-scale civil nuclear program was wound down. Even in the core area of defense, there was significant retrenchment, given extra impetus by the end of the Cold War. All but the most sensitive R&D capacity was privatized, most notably in the company Qinetiq. As Agar has emphasized, none of this was an accident; it should all be considered part of a conscious policy of withdrawing state support from any near-market science.
The withdrawal of the UK state from much strategic R&D provided a test of the notion favored by some free market ideologues that state spending on R&D crowds out private-sector spending. In fact the reverse happened; the intensity of private-sector R&D investment fell in parallel with that of the state’s. The relationship between the two may not be straightforward, however, as the market turn in UK politics led to significant changes in the way companies were run. A new focus on maximizing shareholder value and an enthusiasm for merger and acquisition activity in the corporate sector resulted in the loss of industrial research capacity.
The fate of the chemicals conglomerate ICI provides a salutary example. A hostile takeover bid from the corporate raider James Hanson in 1991 prompted ICI to “demerge” by separating its bulk chemicals and plastics business from its pharmaceuticals and agrochemicals businesses. The company housing pharmaceutical and agrochemical operations—Zeneca—underwent further divestments and mergers to produce the pharmaceutical company AstraZeneca and the agrochemical company Syngenta. The rump of ICI, attempting to pivot toward higher-value specialty chemicals, made an ill-timed, debt-financed purchase of National Starch. A series of divestments failed to lift the debt burden, and what was left of the company was sold to the Dutch company Akzo-Nobel in 2007.
The story of the electronics and electrical engineering conglomerate GEC offers some parallels to the ICI story. In the 1990s, GEC sold its less exciting businesses in electrical engineering and electronics in order to make acquisitions in the booming telecom sector. Renamed Marconi, the company had to restructure after the bursting of the dot-com bubble, and finally collapsed in 2005.
These corporate misadventures resulted in a loss of a significant amount of the UK’s private-sector R&D capacity across a wide range of areas of technology. The common factor was a belief that the route to corporate success was through corporate reorganization, mergers, acquisitions, and divestments rather than through researching and developing innovative new products. There are parallels here with the decline of long-term, strategic R&D in some big corporations in the United States, such as General Electric, AT&T Bell Laboratories, Xerox, Kodak, and IBM, though in the United Kingdom the loss of capacity was significantly greater and took place with no compensating new entrants at the scale, for example, of the US company Google.
It is also possible to interpret these stories as highlighting different beliefs about information and the power of markets. In the old industrial conglomerates such as ICI and GEC, long-term investments in R&D were made by managers and paid for by the retained profits of the existing businesses (which for companies such as GEC were substantially boosted by government defense contracts). A newer view emphasizes the role of the market as a more effective device for processing information; in this view, money locked up in the conglomerates would have been better returned to shareholders, who would have invested it in innovative, new companies.
There are arguments on both sides here. On one hand, questions can clearly be asked about the motivations and effectiveness of the managers of the conglomerates. They may seek to protect the incumbent position of existing technologies, they may be too reluctant to adopt new technologies developed outside their organization, and they may be inhibited by the scale and bureaucracy of their companies. On the other hand, one result of the turn to the markets has been a sequence of investment bubbles resulting in substantial misallocation of capital, together with a pervasive short-termism. Whatever the mechanisms at work, the outcome is not in doubt: a significant loss of private-sector R&D capacity in the United Kingdom since the Thatcher era.
The obverse of the ideological determination of Thatcher and her advisers to withdraw support from near-market research was a new valorization of “curiosity-driven” science. The result was a new, rather long-lasting consensus about the role and purpose of state-supported science that emphasized economic growth as its primary goal. But its tacit assumption was that innovation could be driven entirely from the supply side. In this view, the best way to make sure that state-supported science could contribute to a strong economy was by creating a strong underpinning of basic research, developing a supply of skilled people, and removing the frictions believed to inhibit knowledge transfer from the science base to the users of research.
The supply-side view of science policy was first clearly articulated in 1993, in a white paper introduced by the Conservative science minister William Waldegrave. This influential document halted a pattern of decline in research funding in the academic sector, using the classical market failure justification to call for the state to fund basic research. It reasserted the role of the private sector as the key funder of applied research, and with a continued program of privatization of government research establishments ensured a further withdrawal of the government from strategic research.
The advent of a Labour government in 1997 did not change matters. In line with the general acceptance of the post-Thatcher settlement, there was considerable policy continuity. A major policy statement in 2004, under the sponsorship of an influential and long-serving science minister, Lord Sainsbury, restated the principles of supply-side science policy.
The Sainsbury approach included new elements that reflected the changing corporate R&D landscape: more emphasis on spin-out companies based on protectable intellectual property and funded by venture capitalists, and on the aspiration to attract overseas investment. A sense that there was now too little private-sector research underpinned an explicit target for increasing business R&D over the next 10 years, to 1.7% of gross domestic product (a target that was conspicuously missed, as the figure currently stands at 1.1%).
The main practical effect of the 10-year investment framework was a series of real-term increases in spending on academic research. This was accompanied by a further run-down of strategic research, with R&D spending by government departments continuing to decrease.
Meanwhile, policy-makers displayed a growing sense that the academic research base, now benefitting from a more generous funding settlement, should be pressed harder to make sure it delivered economic growth. This expectation manifested itself in a heightened rhetoric about “impact,” with various bureaucratic measures to incentivize and reward activities that produced such economic effects, whether through the formation of spin-out companies or through collaboration with established businesses. These measures culminated in the 2014 Research Excellence Framework, which included impact as a criterion to be assessed in university research, and whose results directly determine university research funding.
The emphasis on impact produced the paradoxical effect that even as the overall balance in the UK’s research system in fact shifted from strategic research toward undirected research, many people in the academic part of the system felt that they were being pressured to make their own research more applied.
The industrial policy of the Conservative governments between 1979 and 1997 was to not have an industrial policy. The New Labour government of 1997 broadly accepted this consensus, in particular resisting so-called vertical industrial policy—that is, specific measures in support of particular industrial sectors.
Yet absolute opposition to industrial policy was at times also honored in the breach. The government’s policy of partial devolution to Scottish and Welsh assemblies gave an economic development function to these administrations and to agencies in the English regions. In 2007 an innovation agency—the Technology Strategy Board—was given free-standing status, empowered to award collaborative R&D grants to industry and to oversee some cross-sector networking activities, mostly between industrial partners.
But it took the global financial crisis of 2007-8 to bring about a change in mood. A new, powerful business minister in Gordon Brown’s Labour government, Peter Mandelson, emphasized the need to rebalance the economy away from the financial sector and toward manufacturing. The automobile sector was singled out for a series of interventions. Most strikingly, plans called for the government to form a new class of translational research centers, modeled on the successful and much-envied centers developed by the Fraunhofer Society, a major German research organization.
In 2010, the new Conservative-Liberal Democrat coalition government accepted the research center plan, continued the support for the automobile sector, and began to speak of industrial policy again. In practice, policy consisted of a mixture of sector-based support and the championing of selected technology areas, and it could be argued that many of the interventions were inadequate in scale. But perhaps the greatest significance of this development was that after 30 years in which the very words industrial strategy were essentially unspeakable in the British state, there was now an acceptance, even in polite political circles, that support for industry was a proper role for government.
What does the innovation landscape in the United Kingdom now look like, after the dramatic shifts of the past three decades? The overall R&D intensity of the UK economy, which 30 years ago was among the highest in the world, is now low compared not only with traditional competitor economies, such as France, Germany, and the United States, but also with the fast-growing economies of the Far East, such as Korea and China.
Within the United Kingdom’s R&D enterprise, there is an academic science base that is very high performing when measured by academic metrics such as citations. But there are some notable problems on the industrial side. Uniquely for a developed economy of the UK’s size, more than half of industrial R&D is conducted by foreign-owned companies. This industrial R&D is concentrated in a few sectors, dominated by the pharmaceutical industry, with other major contributions in aerospace and computing. The biggest change in recent years has been seen in automobiles, where industrial R&D has more than doubled since 2010, perhaps reflecting the sector’s status as the test-bed of the new wave of industrial strategy.
State-supported translational research is, with a very few exceptions, weak. The new Fraunhofer-inspired “Catapult Centres,” established post-2010, are finding their feet. Two of the most successful centers were built around preexisting initiatives, and they are worth considering in more detail as demonstrations of how new translational research capacity can be created. These are the Warwick Manufacturing Group (WMG) at the University of Warwick and the Advanced Manufacturing Research Centre (AMRC) at the University of Sheffield. Both are the creations of individual, highly entrepreneurial academics (Lord Kumar Bhattacharyya at WMG and Keith Ridgway at AMRC), and both began with a strong sector focus (automotive at WMG and aerospace at AMRC).
Although both institutions have grown out of conventional research universities and remain associated with them, their success arises from a mode of operation very different from university-based science, even in applied and technical subjects. AMRC began as a collaboration with the aircraft manufacturer Boeing, soon joined by the aero-engine manufacturer Rolls-Royce. Much of the research is focused on process optimization, and it is carried out at industrial scale so that new processes can rapidly be transferred into manufacturing production.
A key feature of such translational research centers is the way that the large companies that form their core partners—Boeing and Rolls-Royce in the case of AMRC, and Jaguar Land Rover for WMG—can bring in smaller companies that are part of, or aspire to be part of, their supply chains, involving them in joint research projects. Another way in which these translational research centers extend the mission of the traditional research university is through a greater involvement in skills development at all levels, including the technical skills typical of an engineering apprenticeship program. One measure of the success of the institutions is the degree to which they have been able to attract new investment in high-value manufacturing into what since the 1980s had been underperforming regions that had failed to adapt to successive waves of deindustrialization.
Meanwhile, economists and policy-makers in the United Kingdom and the United States are increasingly recognizing that the effects of deindustrialization on regional economies have in the past been underestimated. For example, in a 2009 article in Harvard Business Review, Gary Pisano and Willy Shih, both professors of business administration, drew attention to the way in which manufacturing anchors what they called a “manufacturing commons,” the collective resources and knowledge that underpin a successful regional cluster.
These commons are based on the collective knowledge, much of it tacit, that drives innovations in both products and processes. A successful manufacturing commons is rooted in R&D facilities, networks of supplying companies, informal knowledge networks, and formal institutions for training and skills. Pisano and Shih’s key point is that the loss of a manufacturing plant, perhaps through outsourcing, can have a much greater impact than the direct economic impact of the loss of the plant’s jobs, by eroding this larger manufacturing commons.
But stories such as those of the Sheffield Advanced Manufacturing Research Centre suggest that manufacturing commons can be rebuilt. The emerging formula brings together several elements. Research facilities need to have an avowedly translational focus, and they should create strong research partnerships between or among academia, large companies already operating at the technological frontier, and smaller companies wishing to improve their innovation practices, possibly to make them more competitive as suppliers to the large companies. Education institutions need to focus on building skills at all levels. They should be linked with these research centers, creating clear pathways for individuals to progress from intermediate-level technical skills to the highest-level qualifications in technology and management. As these research facilities become successful and recognized, this should lead to a virtuous circle in which further inward investment is attracted and the existing business base grows in capability.
The past decade has seen a new consensus about industrial strategy emerge in the United Kingdom, to this extent at least: the Conservative government has a department with industrial strategy in its title (the Department for Business, Energy and Industrial Strategy) and has published a major policy document on the subject, and the opposition Labour Party advocates an industrial strategy as a major plank of its alternative economic policy.
To what extent is a consensus emerging on the substance of what an industrial strategy looks like? One attempt to articulate a new consensus has recently been made by the Industrial Strategy Commission, an independent initiative supported by the Universities of Sheffield and Manchester, of which I was a member.
In the commission’s view, the beginning of a strategy needs to recognize some of the real weaknesses of the UK economy now. One key issue that has become particularly pressing since the global financial crisis is the very low rate of domestic productivity growth. There is a global context here, in that productivity growth throughout the developed countries has been slowing since the 1980s. But the situation in the United Kingdom is particularly bad: levels of productivity were already significantly below those achieved in the United States, France, and Germany, and the slowdown since the global financial crisis has been dramatic.
The United Kingdom also has gross geographic disparities in economic performance, with an economy dominated by a single city, London. The UK’s second-tier cities underperform, there are many very poor urban areas that have not recovered from 1980s deindustrialization (analogous to the US Rust Belt), and many places in the rural and coastal peripheries have been left behind by economic success elsewhere.
As the commission sees it, an industrial strategy should be framed with a view of the whole economy, not just a few high-technology sectors. It needs to recognize the importance of the state as an actor uniquely able to coordinate activities and create new markets. And if it is to have a long life, the strategy needs to be linked to the broader long-term strategic goals of the state.
One positive aspect of the 1980s turn to free market liberalism has been an increased recognition of the importance of competition in driving innovation. But the wave of privatization that occurred has produced a set of industries (in transport, energy, and water, for example) that are heavily regulated by the state, but whose structure and incentives do not seem to reward new investment or innovation. This needs to be rethought.
The United Kingdom has underinvested in infrastructure for many years. For traditional hard infrastructure—roads and railways—the investment criteria used to assess new investments have rewarded parts of the country where the economy is already strong, and this must change. Of equal importance, investment needs to include the infrastructures underlying newer parts of the economy, such as mobile telephony and fast broadband internet coverage. Nor should the soft infrastructure that makes successful industrial societies function be neglected—in education and health, for example. The commission’s headline recommendation here is for a Universal Basic Infrastructure guarantee to ensure that all parts of the country have in place the infrastructure needed to make economic success possible.
Policy-makers across the political spectrum now seem to realize that the R&D intensity of the UK economy needs to increase. But this needs to be done in a way that considers the whole landscape: public- and private-sector, undirected, use-inspired, translational, and strategic. More emphasis is required on the translational part of the picture than we’ve seen before, and the links to skills at all levels need to be made more coherent. Currently the geographical distribution of R&D, in public and private sectors alike, is highly imbalanced, with the biggest investments being made in the most prosperous parts of the country: London and the South-East. This too needs to change; if new R&D institutions are to be set up, the role they can have in catalyzing regional economic growth needs to be explicitly considered when decisions are made on their location.
Above all, the United Kingdom needs to move beyond the supply-side science policy that has dominated innovation thinking for the past three decades. More attention needs to be paid to generating demand for innovation. Here the government can have a central role, by using its spending power much more purposefully to encourage innovation in the private sector, especially when linked to the strategic goals of the state. In the UK’s case, these include a long-term commitment to reducing the carbon intensity of the energy economy while maintaining the security and affordability of energy to domestic consumers and industry. The United Kingdom also maintains a wide, cross-party consensus in support of universal health care coverage. These goals are unlikely to be deliverable without substantial innovation. Done right, industrial strategy should enable the state to meet its strategic goals while at the same time providing new business opportunities for the private sector.
In the post-war years, the United Kingdom, like other developed countries, had a warfare state, which did successfully drive innovation. The innovation system associated with the warfare state was dismantled, and what has arisen in its place has not been sufficient to drive economic growth or to meet the long-term challenges UK society faces. This, too, seems to be a difficulty shared by the United States and other industrialized nations.
We should not be nostalgic for the Cold War, but the United Kingdom does now need to rebuild an innovation system appropriate for its current challenges. Rather than attempting to re-create the military-industrial complex of the past, we should aspire to a social-industrial complex that can drive the innovation that is needed to create a sustainable, effective, and humane health and social care system and to place the energy economy on a sustainable, low-carbon footing.