Forum – Winter 2017
Needed: better labor market data
In “Better Jobs Information Benefits Everyone” (Issues, Fall 2016), Andrew Reamer ably describes recent progress in building longitudinal labor market data systems. But as he notes, there is still work to be done to create a nationwide, not simply a state-based, system. Some states are still in the early stages of connecting data on college programs with wage records, and most states have not yet begun to tap into this data to improve the college-to-career transition.
In addition, most of these systems have a major deficiency: they lack data on the types of jobs that people hold. As a result, educators, employers, and policy makers have a tough time trying to pinpoint where individual postsecondary programs need to be expanded or reduced in line with employer demand. Without this information, it is also nearly impossible to trace the career trajectories that graduates take as they move from job to job. The solution is well-known: the federal government and states could simply add occupational identifiers and other detailed information to Unemployment Insurance wage data collected from employers.
Reamer recognizes that an information capability that connects individual postsecondary programs and careers has become essential for understanding middle-skill job opportunities. With middle-skill jobs, as with bachelor’s degree-level jobs, what you earn is closely related to what you study. These jobs tend to be closely connected to local labor markets and characterized by demand not only for traditional degrees, but also for a range of certificates, certifications, and licenses. What is sorely needed is a major effort to map such jobs—including all the programs, pathways, and credentials that lead to good jobs that pay without a bachelor’s degree.
Few people will disagree when Reamer says, “Remarkable opportunities are available to enhance the workings of US labor markets through modest investments to improve workforce data resources.” As he suggests, building out the national information infrastructure is a high-leverage opportunity with relatively low costs. If policy makers want to get serious about the future of the middle class, then mapping the connections between education and economic opportunity will naturally be Job No. 1.
Andrew Reamer rightly calls for improving the collection and use of data for advancing decisions about human capital. He methodically and thoroughly recounts the various sources of workforce information that, as he points out, are numerous, often scattered, and sometimes redundant.
Although Reamer briefly mentions various stakeholders who would benefit from improving the connectedness and usability of this information, these issues should matter to everyone. What are people supposed to do after a mass layoff when they believe that they do not have the skills required for the jobs remaining in their hometown? How can employers more efficiently find the workers they need to grow their businesses? How can policy makers know whether student financial aid investments are helping to move students into successful careers?
Among Reamer’s list of recommendations for federal and state data improvements, he calls for adding occupation information to state Unemployment Insurance wage records and for incorporating shorter-term nondegree credentials, along with traditional degrees, into statewide longitudinal data systems that can show how people move through all of the stages of education and training and into the workforce.
When matched with data on education and training, occupation information on wage records would help us to see whether people are finding jobs in their fields of study and indicate the success of workforce training programs. Moreover, requiring the addition of occupation information on wage records would reveal what is happening on a far greater scale than methods that currently rely on surveys, as Unemployment Insurance wage records cover about 80% of the civilian workforce. Although industry codes are currently required on wage records, this information is insufficient. Industry codes cannot indicate, for example, whether people who received training in data processing and now work in the retail industry are supporting the computer systems that serve their employers, or instead are working as sales clerks at company stores.
Fully supporting statewide longitudinal data systems that securely match this information would empower students and workers to see how much they might be able to earn given their occupational goals. Moreover, if ongoing efforts to catalogue skills across occupations are successful, we would be able to delineate which skills are most valuable across occupations, even as the labor market changes ever more rapidly. Educators could make their coursework more market-relevant. Employers would have a better sense of the skills of the available workforce, and thus could make more informed decisions about where they might want to set up or expand shop.
With all of this great potential, the changes Reamer calls for would be well worth it. Although it will require more investment upfront to modernize federal and state data systems, all education and workforce stakeholders would ultimately benefit by having powerful information to make better decisions about investments of time and money.
What are middle skills?
In “The Importance of Middle-Skill Jobs” (Issues, Fall 2016), my colleague Alicia Sasser Modestino provides a good review of labor market trends. Her focus on middle-skill jobs is especially important given persistent and widespread concerns regarding prospects for the middle class in the United States. A number of points are worth considering further.
The concepts of middle-skill jobs and middle-class jobs have no official or standard definitions, and as commonly used the two often refer to somewhat different groups of jobs. Also, the education levels of workers are often used to define the skill levels of the jobs they hold, but it would be better to define the skill requirements of job tasks independently of worker credentials. Although it is likely that most middle-skill workers are matched to middle-skill jobs, and vice versa, defining job requirements based on worker characteristics makes it more difficult to investigate whether there is any mismatch between workers and jobs.
Most definitions of middle-skill jobs include those discussed in the article, such as skilled trades, higher-level clerical and administrative support occupations, technical jobs, some sales jobs (e.g., insurance agent, wholesale sales representative), and a diverse group of associate professional and similar jobs, such as teacher, social worker, nurse, paralegal, police detective, and air traffic controller, among many others. Although most of these jobs are likely to support a middle-class lifestyle and personal identity, the degree to which this is the case will depend on whether one’s definition of middle class emphasizes earnings, job education requirements, or other job characteristics, such as freedom from close supervision, as well as the type of household to which an individual belongs (e.g., single individual, two-earner couple, single parent). Likewise, there are jobs that are generally considered less-skilled whose pay may be within the range considered middle class, such as long-haul truck driver. Such jobs were even more common prior to the decline of manufacturing production jobs and unionization rates that began in the late 1970s, a fact that attracted renewed attention recently in political discussions. All of which is to say that there are strong relationships between workers’ education and training, job skill requirements, job rewards (both material and nonmaterial), and social class, but that these concepts are not identical and their relationships are not one-to-one.
The author makes a significant point regarding the future of middle-skill jobs, most of which are presumably middle class. A large literature in labor economics argues that computer technology and automation are eliminating such jobs, driving inequality growth. However, the article indicates that the share of all jobs that are middle-skill has not changed recently, although a greater share of such jobs may require some college. These trends are important to monitor.
It is also important to understand that occupational change in the United States and other advanced economies has been more gradual than often recognized and has not accelerated appreciably in recent years, despite widespread belief that the diffusion of information technology is radically altering the nature of work. Moreover, official projections suggest continued gradual change in the occupational structure over the next 10 years. In addition, retirements and ordinary turnover will create vacancies for new job seekers even within occupations that will decline as a proportion of the workforce. The US Bureau of Labor Statistics projects that between 2014 and 2024 there will be fewer than 10 million net new jobs created, but more than 35 million openings because of such replacement needs.
Finally, research shows there is a persistent tendency among observers to confuse cyclical weaknesses in overall demand with structural changes in the labor market. Concerns regarding technological unemployment spiked during the Great Depression and post-war recessions, but dissipated after economic growth rebounded and unemployment fell to normal levels. The extent of technological unemployment tends to be overestimated while the role of aggregate demand insufficiency is underestimated. The United States and other countries do not need to look to the future for a possible jobs crisis; they have experienced a jobs crisis since the beginning of the financial crisis in 2008. Raising education levels among young cohorts is necessary to keep up with technological change that is steadily but gradually altering the structure of employment. However, more effective macroeconomic policies can have a quicker and broader impact on the job prospects of middle- and less-skilled workers, as the strong growth of the late 1990s demonstrated.
New toxic chemical regulations
Two informative articles in the Fall 2016 Issues—“Not ‘Til the Fat Lady Sings: TSCA’s Next Act,” by David Goldston, and “A Second Act for Chemicals Regulation,” by Keith B. Belton and James W. Conrad Jr.—are minimally to moderately encouraging about the human health ramifications of the recent overhaul of the Toxic Substances Control Act (TSCA). However, the argument in both reviews would be strengthened, as would the amended TSCA, by putting public health concepts at the forefront.
The original TSCA was a mixture of two types of preventive approaches. Primary prevention, which results in a chemical never being produced, occurred through the law’s requirement of pre-manufacturing approval, based primarily on reviews of the chemical structure by experts at the US Environmental Protection Agency (EPA) who were knowledgeable about basic toxicological science. The EPA could ask for toxicological or other data if there were concerns about potential mutagenicity or other adverse consequences. As with all primary health prevention modalities, we cannot directly measure the effectiveness of this approach because we do not know how many chemicals would have produced adverse health effects had industry not weeded them out by routinely using existing tests for such endpoints as reproductive and developmental toxicity. Note that there are many billions of potential chemical compounds, and it is estimated that the industry does toxicity testing on perhaps seven compounds for every one that is eventually manufactured.
The secondary preventive aspects of the original TSCA, related to chemicals that were already in commerce, were far weaker for many reasons, including all of the difficulties in removing a product once it is in circulation. This secondary approach relied on risk assessment, a technique valuable for secondary rather than for primary prevention.
Public health theory and practice give primary prevention far higher priority than secondary prevention. Yet the recent focus on amending TSCA has been on chemicals already in commerce. Although those chemicals are of great importance, particularly given the limitations of the original TSCA, the risk of harm due to the inappropriate release of a new chemical is potentially far greater. Even assuming 99% effectiveness of existing toxicology testing aimed at avoiding a chemical with adverse reproductive and developmental effects, one of every hundred such chemicals will slip through—and I personally doubt that current tests are 99% effective. Yet the new TSCA, like the European Union’s REACH regulatory program, though highly dependent on toxicology testing, does not focus sufficient resources on improving the effectiveness of testing techniques. Further, by requiring epidemiological evaluation of possible cancer clusters and an unnecessary focus on reducing animal testing, it inherently reduces the priority that should be placed on primary prevention. Although epidemiology is important, a causal linkage between a chemical and cancer found in an epidemiological study in essence represents a failure of predictive toxicology. Let’s avoid such failures through better toxicological science.
Both articles point out that defining many of the central terms in the amended TSCA will require years of regulatory decision making and court battles. Unfortunately, neither the EPA’s leaders nor those adjudicating competing interpretations will be guided by a clear statement in the new TSCA of the relative importance of primary prevention in guiding the EPA for perhaps another 40 years.
To read the article by David Goldston and the one co-authored by Keith B. Belton and James W. Conrad Jr., one might be inclined to think that two different laws had recently been passed to bolster chemical safety and regulation. That divergence, in and of itself, may be the best early indicator of the future success or failure of the Frank R. Lautenberg Chemical Safety for the 21st Century Act, otherwise known as the long-awaited update to the Toxic Substances Control Act (TSCA).
If history can tell us anything about the present (and it most assuredly can), then the authors are more than justified if they seem a bit worried about the future of chemicals management under the guidance of a reformed TSCA. Implementing a bill as multidimensional as TSCA proved to be a Herculean (maybe Sisyphean?) task in the first go-around. Will this second attempt, 40 years later, prove any easier?
Goldston offers some insider perspective on the evolution of this most recent iteration of the law and seems concerned that some of the flaws nagging implementation of TSCA over the past several decades may now be baked into this new version as well. He points, in particular, to issues such as preemption (i.e., federal law preempts attempts by states to impose stricter laws of their own) as signs of where the language appears strong and severe but is also ambiguous, which may prove an early indicator of where fights are most likely to crop up in the coming years. And although data sharing and confidential business information issues appear to have gotten a useful (if not perfect) upgrade, Goldston points out that provisions for applying TSCA and other forms of chemical controls over imports actually got weaker in the new law.
While Goldston seems most concerned about process and procedure in some of the murkier areas of the new TSCA, Belton and Conrad point to concerns with the uptake of new scientific models, methods, and practices. They point out that today’s toxicology has risk-assessment tools that were previously unavailable, and that there is a need to keep pace in the regulatory realm. But, they argue, these tools and techniques “are far from battle-tested”—that is, they aren’t quite up to the legal fight that will inevitably fall on their shoulders when they are used.
Though the procedural elements that Goldston raises are surely worrisome, in a sense they are part of the standard implementation process and therefore anticipated sites for continued work. The scientific issues raised by Belton and Conrad, however, present one of the unique challenges of implementing a law such as TSCA. Over the past 40 years, the scientific infrastructure underpinning environmental and occupational health and safety, and more generally toxicology and human health, has evolved tremendously. The questions we ask, the ways in which we measure health, and our understandings of vulnerability and vulnerable populations have all changed dramatically. Endpoints, disease etiology, epigenetics and endocrine disruption, and the tools available to measure and identify chemicals at previously unmeasurable concentrations have all changed—and transformed how we think about and what we expect when we talk about safety and risk. How can we build resilient, adaptive regulatory systems that don’t take 40 years to be updated?
One key piece to building this sort of learning regulatory system is to ensure that the law is not abandoned during the course of implementation. In the first go-around 40 years ago, TSCA was orphaned shortly after birth. Changes in the oversight committees of TSCA, along with natural electoral changes, left TSCA abandoned in Congress. The nascent environmental community lacked the dedicated expertise and resources it needed to follow TSCA over the long haul. And since TSCA had emerged without a broad public foundation and the intricacies of the law were largely invisible outside of government operations, there was no public to hold anyone accountable. Even though the flaws of the original TSCA were many, its orphan status during its early, difficult years may have been the weakest aspect of the law. This time around, successful implementation will require participation and vigilance from a diverse group of stakeholders—the same group that helped to make this revision finally possible.
Rethinking biosecurity
In “Biosecurity Governance for the Real World” (Issues, Fall 2016), Sam Weiss Evans has done an excellent job of laying out the reasons why current biosecurity rules are ill-suited to provide the protection we seek against the misuse of biological knowledge. By framing the problem as one of controlling access to a limited number of “select agents” and monitoring only the life sciences research conducted with government funding, the current regime cannot help but be partial in coverage and almost certainly ineffective against a range of potential threats.
As usual, however, it is easier to see the faults in an existing set of institutions and rules than to devise a more workable remedy. Evans suggests that the biology community needs a group of indigenous professionals, similar to the “white hats” that have emerged in the field of computer/network security research, to police ongoing life science research in areas of concern. But just as the National Research Council concluded in what has come to be called the Fink Committee report that the “gates, guards, guns” model employed in the area of nuclear weapons research was inappropriate for the diffuse and largely civilian biological sciences research community, it is questionable whether a concept that fits computer science would work for biology. Software is created in a form that is easily shared online; biological research, as Evans points out, is produced in laboratories, each with its own form of tacit knowledge and organizational culture.
Will the pharmaceutical companies, whose research is currently not covered by the federal government’s Dual Use Research of Concern rules except on a voluntary basis, be willing to fund professional biologists to monitor their research projects the way a company such as Microsoft might hire a “white hat” to debug its computer code? Could an insider designated to monitor research in a life science laboratory be expected to blow the whistle on the research of the lab leader? Is there a community of amateur biologists with the required professional expertise analogous to the amateur hackers who search through code for fun, and if so, how would they gain access to the biological research at an early stage, when control is still possible? The DIY biologists and BioArt communities might seem to fit the bill, except that their numbers are few compared with the hundreds of thousands of people with advanced degrees in the life sciences in the United States alone, and their access to established laboratories is almost nonexistent.
In short, the design of a new regime raises many problems that need to be analyzed using the approach that Evans champions in his critique of the current controls: it should respect the specific context of the biological sciences, including the diversity of settings in which research takes place and their national and international links to other laboratories. In the long run, the most effective response to biosecurity concerns is likely to lie in the slow process of increasing awareness of the issues in the life sciences community, a job that is likely to take more than one generation to accomplish.
Recent advances in biotechnology, such as gene editing, gene drives, and synthetic biology, challenge the criteria and procedures put in place for identifying and regulating what the federal government considers to be Dual Use Research of Concern in the biosciences. It is increasingly difficult to flag experiments for additional scrutiny or limits on publication in the name of biosecurity. For example, with advances in gene editing, we can transform benign organisms into vehicles of toxicity or disease without technically inserting recombinant DNA, thus skirting regulatory definitions and the limits of detection.
In the face of these challenges, Sam Evans suggests that “instead of building fences around narrow objects of concerns, we should be building conversations across areas of relevant expertise.” Specifically, he highlights creating resources that “sensible scientists would turn to when they have a question about the security aspects of their research” as an alternative to the static lists of objects that trigger assessment and oversight, such as the Select Agents Rule and the roster of seven categories of experiments deemed to be of concern. Societal interests would be embedded in the design and conduct of research, as natural scientists reflect on the societal goals of their work, with security as one of those goals, in partnership with security experts. This is a laudable concept; however, I would argue that it will not become a reality without an umbrella of external, legal motivators and the wisdom of outside actors.
The history of environmental releases of genetically engineered organisms suggests that natural scientists are not prone to reflexivity or favorable to scrutiny beyond the norms of scientific integrity. They have balked at the idea of regulation, questioning its necessity and innovating around it through the use of gene editing; labeled as Luddites those citizens and stakeholders with concerns about genetically engineered organisms; and discredited scientists who publish studies showing potential risk. What makes us think that biotechnologists with a vested interest in seeing their work progress would feel any differently when it comes to intentional threats (biosecurity) versus unintentional hazards (biosafety)?
A balance must, therefore, be struck between Evans’s model of self-governance in partnership with security experts (in a reflexive approach) and mandatory, legal mechanisms with external checks and balances. However, we are then back to the problem that the Select Agent Rules, the Biological Weapons Convention, and other regulations cannot keep pace with advances in genetic engineering, gene editing, and synthetic biology. I suggest that we should consider models from the fields of public administration, risk governance, and environmental management that focus on adaptive, inclusive, engaged, and iterative approaches based in law, but with the flexibility to change with the technologies. These approaches should include the participation of those involved in the research, but not rely on them for prudent vigilance. They should also include different types of external experts who can more holistically and objectively evaluate the potential for misuse, including those in the ecological sciences, risk analysis, the social sciences, political science, psychology, anthropology, business management, and world history, among others. The first step toward the design of such oversight systems is to dispense with the idea that self-governance by those invested in advanced biotechnologies is the foundation of future biosecurity. Then the work can begin.
Sam Weiss Evans crafts a compelling argument that our risk governance strategies rely on dangerously oversimplified assumptions about the relationships among science, security, and the state. As someone who has studied and managed safety and security policies within biotechnology research programs, I agree with the author’s assessment of the shortcomings of our governance regime. Although he raises important policy considerations, however, his argument would benefit from a greater focus on implementation.
Evans grounds his argument in a critique of newly enacted US policies for the oversight of Dual Use Research of Concern (DURC). When asking “does DURC work?” he misses an opportunity to examine just what “working” entails. The DURC policies articulate multiple goals and guiding principles including control, monitoring, and awareness building. These goals signal a more complex appreciation of knowledge production and oversight, but they have become muddled in practice. Updates to the policies and their implementation may help realize the objectives that Evans promotes.
If controlling research with security risks is the primary goal of the DURC policies, then the author’s critique is well warranted. In practice, there is considerable ambiguity, uncertainty, and disagreement over the identification of DURC. Narrowing the policy scope to 15 agents creates artificial clarity about the policies’ application at the cost of real confusion about the policies’ purpose. Maintaining a broader scope of oversight could incentivize institutions to learn from a wider range of use cases and expose key gaps and needs.
Rather than controlling research, the DURC policies’ primary goal could be seen as an element of the monitoring regime that Evans promotes. New information collected about potentially concerning research (and researchers) via the policies could factor into broader threat assessments and mitigation plans. The DURC policies include as a stated goal the collection of information that should inform policy updates aimed at managing the risks of research. In practice, however, the policies lack both a mechanism for updating policy and an oversight entity to ensure that the data collected are useful and used. If such a mechanism existed, the DURC policies might be updated (in concert with other oversight policies) to prompt the collection of additional information, such as accident data, that is important to risk assessments.
An overlooked goal of the DURC policies is raising security awareness. In practice, awareness often translates into rote educational tasks that strive for standard processes (i.e., a “code of conduct”). If DURC policies were more explicitly communicated as incomplete, they could prompt richer interactions among researchers, policy makers, and law enforcement officials. But supporting these interactions requires resources devoted to ongoing and collaborative security governance research in place of box-checking educational modules. In this function, the DURC policies are symbolic, helping to legitimate security as an important consideration of research.
When policies strive to achieve complex goals, we must ensure they don’t fall into foreseeable pitfalls in implementation. DURC policies are incomplete—but recognizing that this is by design can create productive paths forward.
Sam Weiss Evans expresses concern about current efforts to manage risky life sciences research in the United States. He argues that these efforts seemingly rely on faulty assumptions. The author is right to be concerned. Our efforts to manage that small sliver of research with unusually high potential for adverse consequences if misapplied are inadequate and, in some cases, misdirected. In addition, these assumptions, as Evans describes, are clearly faulty. But in practice, the problems are even more complex: although some policy makers, security specialists, and scientists understand that knowledge is not discrete and that social context matters, they operate under perverse incentives, with insufficient tools, and without the benefit of appropriate expertise in social sciences.
Scientific investigation always involves choices about experimental design and approach for answering a question or addressing a hypothesis. Some designs and approaches will be riskier than others in generating information that might be exploited by others to do harm. Usually, scientists consider only technical feasibility, effectiveness, and expediency, because the research enterprise rewards quick, high-impact results and does nothing to reward risk awareness. Admittedly, identifying risk is difficult. Current research oversight policy is narrowly focused on a few specific infectious agents in order to be clear and concrete. We need a more comprehensive and generalized scheme for identifying the kinds of research results that require oversight. Certainly, the identification of risk should also consider the social context in which the work is conducted, as well as the unspoken social contract between scientists and the general public, which demands the avoidance of unnecessary harm.
How can we influence the choices made by scientists in the workplace about the specific questions they ask and the experimental approaches they take? Evans mentions the importance of communicating and making explicit contextual information regarding, for example, threat awareness and beneficial applications. Though helpful, this alone is not enough. Unless there is an understanding of and public discussion about conflicts of interest, we will not recognize selective and biased use of this contextual information. Deliberations about H5N1 avian influenza work in 2012 by the National Science Advisory Board for Biosecurity (of which I was a member) failed to acknowledge such conflicts and did not adequately address the timing and real-world delivery of putative benefits.
Additional perspectives and tools should be made available. We need to instill a sense of moral and ethical responsibility among scientists and other parties within the science research enterprise. New approaches (as yet to be described) for effective governance of scientific research are also necessary. Role models and incentives will be crucial. And none of this will work unless it is “forward-deployed”—that is, embraced by those in the “field” and by all those who stand to gain and lose by the conduct of the work about which we care so much.
Chinese technocracy
China is well known for the technocratic character of its political structure and governance. A large number of political leaders either were trained as engineers or had extensive experience working in state-owned technical companies. Liu Yongmou’s article, “The Benefits of Technocracy in China” (Issues, Fall 2016), offers a good, brief, historical and cultural interpretation of this fact and argues for the relevance of technocracy to contemporary Chinese politics. It also challenges the common “antidemocratic” and “dehumanizing” view of technocracy in the West and invites Western scholars to reconsider their opposition.
Complementing Liu’s argument would be a consideration of the influence of technocracy in current Chinese politics, given the decreased percentage of current Politburo members trained in applied science and engineering. One explanation for this shift might be that current leaders were mostly educated after the Cultural Revolution or during the early years of the Reform and Opening-Up, when the focus of national development shifted toward reconstructing the social and political order, which required experts from the humanities and social sciences. Over the past 30 years, China has been in transition from a centrally planned economy to a socialist market economy. The national economic system became less centralized, and more state-owned companies were either integrated with private capital or transformed into private firms. Thus, engineers had fewer opportunities to be promoted to higher leadership positions in the government through the meritocratic system, and more engineering students were interested in going to work for private firms where they could earn much higher salaries.
It is important to realize that government workers and Communist Party cadre do not earn the high salaries typical of those working in private corporations. Today, there may also be more political leaders from political science, law, and economics because of China’s increasing interest in promoting social equality and global economic and political influence.
Another complementary topic concerns the connection between technocracy and meritocracy in Chinese politics. Certainly, loyalty to the Party and strong relations with Party leaders are crucial for elite selection and promotion. However, without a certain threshold of competency, including the ability to understand technical and economic indicators for development, anyone in power can have his or her legitimacy challenged by upper-level leaders, peers, subordinates, and the public. Success in managing economic development remains the most important factor for evaluating the performance of political leaders. The elite selection system in China today might be more appropriately called “techno-meritocracy”—that is, the most qualified political leaders are arguably still those who have passed numerous rounds of “tests” of their competency in promoting economic development driven by technological change. Officials may gain power through channels other than the political meritocratic system, but their legitimacy can always be challenged on the basis of techno-meritocratic criteria. As Liu Yongmou rightly suggests, this is one of the strengths of the current Chinese techno-meritocratic political system.
There are few, if any, of the “benefits of technocracy in China” described by Liu Yongmou with which I would disagree. I have, in fact, insisted that scientistic and technocratic movements have played a central role in increasing the production of material goods and the effective provision of public services wherever they have been employed, and I am convinced that many public decisions in today’s world unavoidably depend in large part on technically competent advisory input.
Moreover, Liu is undoubtedly correct in arguing that modern technocracy in China, which began with the ascendance of Deng Xiaoping, is consistent with a long-standing Chinese tradition of government by an intellectual elite symbolized by the Confucian call to “exalt the virtuous and the capable.” He is also correct in pointing out that knowledge traditionally was more important than the representation of the interests of the people, and that virtue was privileged over capability, but also that in modern China, while knowledge remains more important than expressions of the interests of the people, the traditional emphasis on virtue has been given lower priority. This is where my view of technocracy in modern China begins to diverge from that of Liu.
The technocratic branch of modern economics, which has been driving Chinese policy, places a high priority on economic growth. Justification for this priority has come from the assumption that in a growing economy labor demand will be high, so wages will be relatively high and the wealth produced will thus be distributed, in significant part, to workers. Yet since the mid-nineteenth century in the advanced industrial world, economic growth has been produced almost exclusively by technological innovations that have had the effect of lowering labor demand and increasing wealth concentration. Technocrats have historically been insensitive to issues of wealth concentration, and that seems to have been true in China. Relatively short-run increases in labor demand are very likely to diminish even as the economy continues to grow, exacerbating the concentration of income that saw China’s Gini coefficient (a measure of income dispersal for which equality = 0 and all income to a single individual = 1) grow from 0.30 in 1980 to 0.55 in 2012.
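For readers who want the measure made concrete, the Gini coefficient cited here can be written, in one standard formulation, as half the average absolute difference between all pairs of incomes, divided by mean income:

\[
G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n}\left|x_{i}-x_{j}\right|}{2\,n^{2}\,\bar{x}},
\]

where \(x_{i}\) is the income of person \(i\), \(n\) is the population size, and \(\bar{x}\) is mean income. G equals 0 when all incomes are identical and approaches 1 as all income accrues to a single individual, matching the parenthetical definition above.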
There is a second downside to a technocratic elite that, as in China, is particularly hostile to criticism and that can, as a result, afford to be narrowly focused. The heavy emphasis on engineering, and more recently on economic, expertise, both of which focus on efficient production, has not, to date, been balanced by expertise in the psychological consequences of intense work environments or in the environmental consequences of focusing exclusively on producing specific material goods. One consequence has been that even where relatively high-wage jobs have become available, suicide rates have increased among workers, and the health costs of pollution have exploded.
As Liu Yongmou pointed out, I have argued elsewhere that engineering education has been broadened, making some forms of technocracy more open to considering a broad range of issues ancillary to the primary focus of policy decisions, and that may mitigate some of the negative effects of technocracy in China—but the evidence on this issue is still fragmentary.
Green accounting
In “Putting a Price on Ecosystem Services” ( Issues, Summer 2016), R. David Simpson provides a thoughtful assessment of the concerns raised by the valuation of ecosystem services. I, like Simpson, have been working in this area for close to two decades. I have marveled at the ascent of ecosystem services from obscure terms in the mid-1990s to near ubiquity today, while expressing concern that the concept’s mainstreaming risks meaning all things to all people.
Simpson’s core concern is that “The assertion that ecosystem services are undervalued is repeated so often, and so often uncritically, as to seem almost a mantra.” He points out, rightly, that few studies have credibly provided economic valuations of service provision in the field, and that in many cases service provision may not actually be worth very much. He is certainly correct that location matters. At the same time, if ecosystem services are simply ignored in land use decisions, as is often the case, then their value effectively becomes zero. That is almost certainly incorrect. I would suggest, though, that focusing on the inadequacies of service valuation risks missing two larger points.
First, although economic valuation and big numbers may prove rhetorically important in persuading policy makers that they should pay attention to the provision (or loss) of services, calculating their dollar value can often be irrelevant to policy decisions where the key concern is relative cost. In the celebrated story about New York City’s drinking water, for example, officials had to choose between ensuring water quality through a built treatment plant and ensuring it through land use investments in the Catskills watershed. Investing in the Catskills proved much less expensive. In this case and others, the absolute value of the ecosystem service doesn’t matter. The question is whether it is wiser to invest in “built” or “green” infrastructure, and this is relatively easy to calculate. Yes, valuing ecosystem services is hard and we are still not very good at it, but that doesn’t matter when choosing between policy options.
Second, marginal biophysical valuation is usually more important than economic valuation. Following on the previous paragraph, decision makers need to know how much service bang they are getting for the buck. This requires far better understanding of the science of service provision. We know with confidence that paving over an entire wetland can cause water quality problems. But what is the impact on service provision if the development removes just 10% or 20% of the wetland? This type of piecemeal loss is where most land use decisions take place; yet the science in this field remains nascent. Contrast this, for example, with our understanding of marginal productivity for agricultural lands. We are very good at managing land to provide ever more food per acre. We simply do not have similar experience explicitly managing land to provide services or prevent their loss.
Simpson rightly calls for rigorous and credible economic valuation, but it is equally important to recognize that, for many land use decisions in the field, absolute valuation is less important than the relative costs of service provision and biophysical valuation.