Physics Envy: Get Over It
Physics studies different things, and in different ways, than other sciences. Understanding those differences is important if we are to have effective science policies.
Physics has long been regarded as the model of what a science should be. It has been the hope, and the expectation, that if sufficient time, resources and talent were put into the sciences concerned with other phenomena—in particular the life sciences and the behavioral and social sciences—the kind of deep, broad and precise knowledge that had been attained in the physical sciences could be attained there too. In science policy discussions there often is the presumption that all good science should be physics-like: distinguished by quantitative specification of phenomena, mathematical sharpness and deductive power of the theory used to explain these phenomena, and above all, a resulting precision and depth of the causal understanding.
Lord Kelvin’s remarks of over a century ago remain the generally held belief regarding the importance of quantification. “When you can measure what you are speaking about and express it in numbers, you know something about it: but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.” Galileo’s remarks probably are the most famous argument for mathematical theorizing: “The universe…cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics.”
There is no questioning the amazing success of physics. It has cast a bright light on the underlying structure and operation of the physical world and enabled humans to use that knowledge to develop the technology that helped create the modern world.
But to what extent are quantification and mathematization necessary or sufficient for the remarkable achievements of physics? Certainly they are related. Quantification is an important part of the reason that physics can achieve such precision of description and predictive capability. And the mathematical structure of its theories not only provides amazing sharpness to its explanations and predictions, but also enables productive deduction and calculation to an extent greater than that of any other science. It is no wonder that scientists in other fields often suffer from physics envy, or that policymakers long for similar power in the sciences that bear on the problems they are trying to address.
But this may be a fool’s quest. The nature of the subject matter studied by a science strongly constrains both the methods of research and analysis that are likely to be productive and the nature of the insights that science can achieve. The emphasis on quantification and mathematical theorizing that has been so successful in physics may not be so appropriate in sciences dealing with other kinds of subject matter. Without playing down their value when attainable, neither quantified characterization of subject matter nor mathematical statement of the theory unifying and explaining that subject matter is strictly necessary, or sufficient, for a science to be precise and rigorous. Some sciences have achieved considerable precision and rigor without having these features. Other sciences have embraced these characteristics methodologically, but have not achieved particularly sharp illumination of their subject matters. I am particularly concerned with these latter cases.
For many sciences, the kind of precise, law-like relationships that physics has identified simply may not exist. On the other hand, the more qualitative understanding that these sciences can achieve can be illuminating and practically valuable. A number of the sciences whose insights are strongly needed to help society deal more adequately with the urgent challenges of today’s world are of the latter sort. The tendency of scientists and policymakers to think that such sciences should be creating physics-like knowledge can diminish the ability of society to take advantage of the knowledge that they are capable of providing.
When numbers are not enough
The nature of the subject matter studied by science differs greatly from field to field. As a consequence, the way the phenomena, and the causal mechanisms working on them, can be characterized and understood effectively also differs. And these differences strongly influence the kinds of research methodologies that are likely to be fruitful. This is not recognized as widely and clearly as it should be.
The subject matter that is today addressed by physics is quite special and seems particularly suited to quantitative and mathematical analysis. Thus consider the Newtonian treatment of planetary motion, which continues to serve as a canonical example of successful science. The location of any planet at any time can be completely described in terms of numbers, as can its motion at that time. Its closed path around the sun can be expressed in terms of parameters of the mathematical function that describes the shape of the orbit. Newton’s explanation involves the mass of each planet and the sun, and how they relate to each other, as well as their location, and this also can be expressed in equations and numbers.
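To see just how compact this characterization is, consider the standard modern form of the result (textbook notation, not Newton’s original geometric presentation). The inverse-square law of gravitation, and the orbit it implies for a planet circling the sun, can be written as

$$F = \frac{G m_1 m_2}{r^2}, \qquad r(\theta) = \frac{a(1 - e^2)}{1 + e \cos\theta},$$

where $r(\theta)$ traces the planet’s path around the sun, which sits at one focus of an ellipse. Two numbers, the semi-major axis $a$ and the eccentricity $e$, fully specify the shape of the orbit.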
The fruitful reduction of apparently complicated and varied phenomena to a set of numbers and equations is the hallmark of physics. Consider modern astrophysics, and in particular the study of whether, and if so the manner in which, the universe is expanding. Things like galaxies, the central objects of observation and analysis in this kind of research, would appear to be enormously complex and heterogeneous, and indeed they are—if one is asking certain kinds of questions about them. But for the purposes of exploring how the universe is expanding, the relevant complexities can be characterized in only a few dimensions, such as mass, age, location, and rate of movement away from the earth. The processes going on within galaxies, and working on them, appear to be the same throughout the universe. This is the faith of physics as a science and it seems well borne out. Astrophysicists are able to characterize what they are studying using numbers and mathematical laws, just as Newton did in his great work.
Physics is remarkable in the extent to which it is able to make predictions (which often are confirmed experimentally) or provide explanations of phenomena based on mathematical calculations associated with the “laws” it has identified. This analytic strategy sometimes is employed to infer the existence or characteristics of something previously unobserved or sometimes not even conceptualized, such as the argument that there must be “dark energy” if one is to understand how the universe is expanding. No other science comes close to physics regarding this kind of theory-based analytic power.
As philosopher Nancy Cartwright has stressed, however, the “laws” of physics hold exactly only under narrow, tightly constrained conditions, such as the vacuum of space or a vacuum in a laboratory. It is not apparent that such tight laws exist, even under controlled conditions, in all arenas of scientific inquiry. That they do seem to exist for physics should be regarded as an important aspect of its subject matter, not a general condition of the world.
But is the quantification of subject matter and mathematical form of theory strictly necessary for a science to be able to make reliable predictions and point the way, as physics so powerfully does, to effective, practical technology?
Consider organic chemistry. Numbers certainly are central aspects of the way elements and molecules are characterized. However, particularly for complex organic compounds, the way molecules are described involves significantly more than just numbers. What atoms are linked to what other atoms, and the nature of their linking, generally is described in figures and words. So too the shapes of molecules. And the characterization of the processes and conditions involved in the forming and breaking up of molecules is to a considerable extent narrative, accompanied by various kinds of flow charts. A part of the “theory” here may be expressed mathematically, but much of it is not.
Or reflect on molecular and cell biology, in particular the exposition of the structure and functioning of DNA, of genes, of gene expression, and the making of proteins. The characterization of DNA resembles the way organic molecules are characterized. Although the helix is a mathematical form, the double-helix structure of DNA is, without exception that I know of, presented as a picture. The base pairs of nucleotides that in different combinations contain the fundamental biological information for the creation of proteins are depicted as strung between two backbone strands, and genes are depicted as sequences of the nucleotides. The processes that lead to the production of a particular protein similarly are described mostly in pictures and flow charts, accompanied by an elaborate verbal narrative.
As biology delves into more macroscopic matters such as how the cell forms proteins, the scientific perspective takes on characteristics that Ernst Mayr has argued differentiate biology from the physical sciences. There often is considerable variation among phenomena classified as being of a particular kind, with individual instances having at least some idiosyncratic elements. And the forces and processes at work tend to vary somewhat from case to case and need to be understood as having a stochastic aspect. The make-up of cells and the details of how a particular protein is made are good examples.
Yet description and prediction of the phenomena studied are quite precise in molecular biology—both those aspects that are basically organic chemistry and those that are more biological (in the sense of Mayr) in nature. Although in neither organic chemistry nor molecular biology does “theory” have anything like the analytic deductive power that theory has in some areas of physics, it does provide an understanding of considerable precision and depth. And as with physics, the kind of understanding that molecular and cell biology has engendered often has provided a clear guide to the practical design of production processes and products. But in organic chemistry and, even more so, molecular biology, although numbers are an aspect of the way phenomena are described, they are far from the full story; nor is mathematics the main language of theory.
In evolutionary biology, the differences Mayr highlighted come into still sharper focus. The study of evolution these days takes from molecular biology the understanding of genes and gene mutation. However, its orientation as a field of study in its own right is to phenomena at a much more macro level: the distribution of different phenotypes and genotypes in a population of a species at any time, changes over time in these distributions, and the factors and mechanisms behind those changes.
A certain portion of this characterization is quantitative. Some phenotypic characteristics, such as the number of teeth or toes, height and weight, the length of a beak, or speed of movement, can be counted or measured. But important phenotypic characteristics such as agility, strength, or attractiveness to the opposite sex may not be readily measurable, or at least not fully captured by numbers. In evolutionary biology, numerical characterization of phenomena, such as Darwin’s description of the beaks of finches on different islands of the Galapagos, almost always is embedded in verbal and sometimes pictorial language, which on the one hand provides a context for interpretation of those numbers and on the other hand contains significant information not included in them.
Not only does a full description of Darwin’s finches include qualitative information, evolutionary biologists also recognize that all finches on a given island are not exactly the same. On average they differ significantly from island to island, and even on a particular island there can be considerable variation. This is very different from the way physicists think of classes of things they study, say electrons, which are understood to have precisely uniform characteristics everywhere they occur. As I’ll explain in more detail below, the presence of a considerable degree of internal heterogeneity in the objects or classes of objects studied by many fields of science distinguishes them from physics in a very important way.
Also, in contrast with Newton’s great work, Darwin’s theory is expressed verbally. And this continues to be the dominant mode of theoretical articulation in modern articles and books concerned with evolutionary biology. Mathematical models are widely used in modern evolutionary biology, but they are not meant to depict, as does theory in physics, how things actually work (even if only under controlled conditions). I recall vividly a conversation I had several years ago with John Maynard Smith regarding the role of his formal models (for example those in his 1982 book Evolution and the Theory of Games) in evolutionary biology. He observed that “evolutionary processes are much more complicated than that. These models are just intellectual tools to help you think about what is going on.”
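To give a feel for what such an intellectual tool looks like, here is a minimal sketch of the Hawk-Dove game from Evolution and the Theory of Games, iterated under simple replicator dynamics. The payoff values, starting frequency, and step size are illustrative choices of mine, not estimates from any real population:

```python
# A minimal sketch of Maynard Smith's Hawk-Dove game under replicator
# dynamics. V = value of the contested resource, C = cost of an escalated
# fight; both numbers are illustrative, not empirical.
V, C = 2.0, 4.0

# payoff[i][j] = payoff to strategy i against strategy j; 0 = Hawk, 1 = Dove
payoff = [[(V - C) / 2, V],
          [0.0, V / 2]]

p, dt = 0.1, 0.1  # initial Hawk frequency; Euler step size
for _ in range(500):
    w_hawk = p * payoff[0][0] + (1 - p) * payoff[0][1]  # mean payoff to a Hawk
    w_dove = p * payoff[1][0] + (1 - p) * payoff[1][1]  # mean payoff to a Dove
    w_mean = p * w_hawk + (1 - p) * w_dove              # population mean payoff
    p += dt * p * (w_hawk - w_mean)                     # replicator equation

print(f"Hawk frequency settles near V/C = {V / C:.2f}: p = {p:.3f}")
```

The model predicts nothing about any actual species. What it does is make precise the logic of why, when fighting is costly, neither pure aggression nor pure restraint can take over a population, which is exactly the kind of help in thinking that Maynard Smith described.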
Yet while the understanding evolutionary biology has given us is largely qualitative, and does not enable sharp prediction of how species will evolve, or the formulation of tight “natural laws” like those Newton discovered, it shapes the way we look at a wide range of empirically observed biological phenomena and processes, and provides a convincing explanation for many. And that understanding is useful, practically. It provides guidance for a range of human efforts, from developing better plant varieties, to understanding and trying to deal with changes in the bacteria that threaten humans, to seeing some of the dangers of environmental changes to which we otherwise would be blind.
Describing the human world
The kinds of understanding that can be attained from the social and behavioral sciences diverge even more from those of the physical sciences. The phenomena studied are almost always very heterogeneous, and classifications somewhat blurry. The kinds of regularities that exist tend to be qualitative and stochastic, rather than sharp and precise.
Much of the description of the phenomena studied by the social sciences is qualitative and verbal. As in biology, the verbal description often is accompanied by numbers, which are intended to give greater precision to such description. However, how much precision is achieved obviously depends on how completely, accurately, and sharply those numbers characterize the phenomena they quantify. Here the situation in the social and behavioral sciences is very different from that in the physical sciences.
A good part of the difference is due to variations in the kinds of phenomena studied. Much of the subject matter treated by the social and behavioral sciences is not only quite heterogeneous, but the general conception of the nature of the phenomenon often has uncertain boundaries, for example, as with “unemployment,” “intelligence,” and “innovativeness.” Phenomena with blurry conceptual edges are not special to the behavioral and social sciences; consider the concept of a biological species where there is cross-species breeding, or the geological conception of an earthquake, which may be accompanied by foreshocks, aftershocks, and ongoing creep of the earth’s crust. In such cases, the phenomenon will often be operationally defined by the numbers used to characterize it. But the choice of such numbers has an arbitrariness to it that may conceal the underlying fuzziness.
For example, consider the concept of unemployment, and the statistics used to measure it. What does it mean to be “unemployed”? In the standard measure, people are unemployed if they report that they do not have a job and have actively been seeking work but not finding it. However, the concept of not having a job but actively seeking one obviously has ambiguous boundaries. People who have part-time jobs but want and are seeking full-time ones are not included in the official unemployment numbers. Also, the official definition of unemployment excludes from the ranks of the unemployed people without a job who have given up hope of finding one and for that reason do not report they are actively searching. And, of course, there is no clear-cut notion of what “actively searching” means.
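A little arithmetic with invented numbers shows how much rides on these boundary choices. Everything below is hypothetical, constructed only to illustrate the definitional sensitivity:

```python
# Hypothetical labor-force counts (in millions), invented for illustration;
# these are not official statistics.
employed_full_time = 120.0
part_time_want_full = 6.0     # part-time workers seeking full-time jobs
unemployed_searching = 8.0    # jobless and actively searching
discouraged = 3.0             # jobless, want work, but stopped searching

# Headline-style rate: only active searchers count as unemployed, and
# discouraged workers drop out of the labor force entirely.
labor_force = employed_full_time + part_time_want_full + unemployed_searching
headline = unemployed_searching / labor_force

# A broader rate: also count discouraged workers and involuntary
# part-timers, and put discouraged workers back in the denominator.
broader_pool = unemployed_searching + discouraged + part_time_want_full
broader = broader_pool / (labor_force + discouraged)

print(f"headline rate: {headline:.1%}")   # about 6.0%
print(f"broader rate:  {broader:.1%}")    # about 12.4%
```

Same underlying population, and the measured rate roughly doubles depending on where the definitional line is drawn.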
I am not attacking unemployment statistics. Rather, I want to illustrate that the numbers used by social scientists almost never have the hardness or precision of most of the numbers used in the physical sciences. A good part of the reason for that is that the phenomena studied often are not as sharply defined. As a consequence, analyses of the state of unemployment by knowledgeable analysts almost always involve a verbal, qualitative discussion and several additional numbers, for example the number of part-time workers, to supplement the standard unemployment figure.
In the case of unemployment, at least there are some obvious things to count or try to measure. For many of the subjects studied by the social and behavioral sciences, there is no direct way to count or measure the variables being addressed. If these phenomena are to be associated with numbers, some proxies need to be identified or some quantitative indicators constructed.
IQ is a good example. Although the concept of intelligence, and the notion that some people are more intelligent than others, are common and apparently useful in lay circles, there are real problems in laying out a sharp general definition of what intelligence means. In such circumstances there is a tendency to define the concept in terms of how it is measured, but those who know a lot about the subject matter often disagree about whether that definition is appropriate to a concept with fuzzy boundaries and heterogeneous attributes, like intelligence or unemployment. As a consequence, psychologists studying intelligence, and economists studying unemployment, tend to use such standard measures as part of a more general and largely qualitative characterization.
The situation is not dissimilar to that of the study of innovation. Again, a basic issue is that innovation cannot be defined in a way that is broad enough to cover the range of phenomena to which the term seems applicable, while also maintaining clear-cut definitional boundaries. Consider the kind of research involved in trying to assess the innovative performance of different firms in an industry. One plausible research strategy is to work with various written records of innovation in the field and what different firms have done, perhaps supplemented by interviews. The characterization of a firm’s innovativeness that would come out of such a study would be qualitative. However, informed people might be able to agree on at least a rough ranking of firms. And one could try to code indicators of innovativeness in the written records quantitatively and construct the interviews to provide a numerical score for various responses. In addition, one can use published numbers that have some relationship with innovation, such as firm research and development (R&D) expenditures and patents. For many economists, it has proven attractive to focus their research on these kinds of numbers by, for example, exploring the relationship between R&D spending and patenting.
However, R&D expenditures and patents have serious limitations as measures of innovative input or output. In many industries, much of innovation goes on through activities that are not counted as R&D. Many innovations are not patented. Qualitative descriptions of the relevant technological histories, and of the role played by different firms and other economic actors in those histories, almost certainly provide not only needed interpretation and context for the numbers but also a way of assessing their meaningfulness. The numbers are part of a description that also is qualitative to a considerable degree. Moreover, paying attention exclusively or largely to such numbers not only ignores or downplays other kinds of knowledge that are at least as relevant, but also can lead to a very distorted view of what is going on.
Complicated is different
Let us reflect again on Newton’s treatment of planetary motion. Although planets clearly differ in a large number of ways, for the purpose of understanding their orbits, it turned out to be possible to characterize all planets as basically the same kind of thing, with their differences specifiable in terms of a few quantitative parameters that determined their orbits, given the way that gravity works.
In contrast, all planets have complex surfaces, and the details of their surfaces vary considerably from planet to planet. Characterization of these surfaces, and description of the differences among planets, involves a lot more than a set of numbers. Various theories have been developed over the years to explain why the surface of, say, Mars is what it is. Parts of these theories involve propositions about the physics and chemistry involved, and some of this may be expressed mathematically. But the broad theory that aims to explain the reason for the topography of the surface of Mars, or other planets, is not expressed mathematically, but rather in the form of a narrative. That narrative will tend to refer to the same set of variables and forces as affecting the surfaces of all planets, but the details may differ significantly from planet to planet.
Such studies are not customarily treated as part of physics. Rather, they are much more akin to the phenomena studied and the questions asked by geologists or climate scientists, or by biologists trying to understand macro-level subjects like evolution and ecology.
Whereas physics can limit the subject matter it addresses so that such heterogeneity is irrelevant to its aims, for other sciences, this diversity or variability is the essence of what they study. Generally some order can be achieved by identifying a limited number of subsets or classes within which the elements are more homogeneous than in the collection as a whole. But in these fields intra-class heterogeneity still tends to be significant, and in many cases the lines between classes are fuzzy. I referred earlier to the concept of “species” in biology as having all of these characteristics.
The issue of significant heterogeneity within a class comes up especially sharply when the questions being asked are concerned with a particular case of a given phenomenon. General scientific knowledge about earthquakes and hurricanes may tell us only a small portion of what we need to know to predict accurately when the next earthquake will hit Berkeley, or whether the next hurricane will hit New York.
The issue of variety within classes and fuzzy class boundaries is especially formidable in the social sciences. The social sciences also have to face the problem that the subject matter they study often changes over time and hence both the entities they study and the basic causal relationships affecting these variables may be different today than they were last year, or a decade ago. Thus economists long have been interested in the relationships between market structure and innovation. However, there are many different kinds of innovation, and a large number of ways in which industry structures differ. Further, over time the nature of the important technologies tends to change, as do the dominant ways firms are organized and the modes of competition. It is not surprising that economists have been unable to find any tight stable “laws” that govern how innovation relates to market structure.
Most emphatically this is not to argue that research on subjects such as innovation and earthquakes cannot come up with general understanding that can be of significant practical value. As noted, evolutionary biology is a very successful science in this respect. Regarding innovation, one important finding is directly related to the heterogeneity and stochastic nature of the subject matter: because of the lack of predictability of which specific innovation paths will work best, a variety of efforts in a field is an almost essential condition for significant progress to be made. In the judgment of at least some economists, this fact provides a much more powerful argument for economic policies that encourage competitive market structures and relatively easy entry into an industry than the static arguments in most economic textbooks. Another important finding is that new firms often play a much more important role in generating significant innovations when a technology is young than when it is more mature. This kind of knowledge is extremely relevant to firms trying to map out an R&D strategy, and to policymakers guiding government decisions on issues ranging from science policy to anti-trust.
What we can expect from science
I have been arguing that sciences that study phenomena that vary significantly from instance to instance, with each instance itself influenced by many factors that also often are quite heterogeneous, are very different from physics. This argument has significant practical implications, because the types of scientific research that focus on understanding such phenomena are particularly common in what Donald Stokes has labeled as “Pasteur’s Quadrant.” Such research is concerned with phenomena that we want to understand better, not simply because we are curious, but also because of our belief that better understanding will help us to deal more effectively with practical problems. The latter objective often imposes a serious constraint on the degree of simplification of the subject matter that scientific research can create through controlled experiments, or assume in theorizing, while still generating understanding that meets the perceived needs that motivate the research. Yet, in science aimed at providing such knowledge, we should not be surprised if the results obtained in one study differ significantly from those obtained in another, even if both appear to be done in similar contexts, in similar ways, and with similar close attention to scientific rigor.
This problem is acutely familiar in the kind of knowledge won by scientific research aimed at deepening our understanding of, and ability to predict, patterns of global warming. Over the past quarter century, research has substantially increased our knowledge of historical climate trends and patterns, as well as of the conditions and forces that seem to lie behind these changes.
But the ability to predict many important types of changes (such as future greenhouse gas emissions levels), or to assess the effects of various changes (future rainfall patterns, for example), not to mention the costs—and benefits—of efforts to reduce emissions, is limited. And different models, based on somewhat different assumptions but all basically compatible with what we know scientifically, yield different predictions.
What I want to emphasize here is that whereas the general knowledge about climate change and its causes is strong, specific knowledge about future effects and their timing is not only weak, but inevitably so. Yet the presumption that climate science should be like physics has fostered the expectation that the research should provide physics-like precision and accuracy. These false expectations lead to inappropriate attention—by scientists, policymakers, and the interested public alike—to questions of uncertainty that are unlikely ever to be resolved because of the nature of the phenomena being studied.
The same characteristics obtain in the biomedical sciences. Certainly scientific research has won for us important and reliable knowledge about the causes of many of the ailments that used to devastate humankind, and in many cases this knowledge has served as the basis for the development of effective methods for dealing with those diseases. However, the precision and power of scientific understanding often is relatively limited.
A good part of the reason is that broad disease categories, like cancer or dementia, are very heterogeneous, both regarding the precise nature of the ailment and the causes that generated it. Science struggles with this heterogeneity by trying to divide up the variety into sub-classes that are more homogeneous. But the history of such research shows that almost always there continues to be considerable heterogeneity within even the finer disease classifications.
Because both the causes and the pathways of many diseases are multiple, and vary from case to case, scientific understanding of a disease often is not strong enough to point clearly to effective treatment. As a consequence, much of the relatively effective medicine we now have owes its origins not so much to scientific understanding as to trial-and-error learning of what works. Sometimes this learning occurs through deliberate experimentation, but in many cases it happens almost by accident. And we have limited understanding of why a number of those treatments work as they do, even if we have strong evidence that they do work.
Also, for many diseases what works for some patients does not work well for others. In some of these cases we have a reasonably good understanding of the patient characteristics and other variables that are associated with effectiveness of particular treatments, and when we do, this can be built into the statistical design for testing different treatments. But in many cases we have little understanding of the characteristics of patients and other factors that cause different responses to a treatment.
Many of these characteristics hold even more strongly in research aimed to improve the effectiveness of educational practice. Practices that perform well in a particular controlled setting (e.g. a laboratory school) very often do not work well when they are tried out in another setting. It also is clear that different children learn in different ways, and different teachers are good at different modes of teaching. Despite this variety, there has developed over the years a body of “common sense” understanding of generally good and generally bad teaching practice. Much of this is the result of professional experience. Some has been won, or at least brought into brighter light, through research. A good example is the relatively recent recognition that what children have learned in their very early years has a strong and lasting effect on what they learn in school. But here, too, the enhanced understanding is the result of careful examination of experience rather than knowledge deduced from deeper causes.
Research in child development psychology did, indeed, provide a basis for suspecting the importance of early childhood learning. On the other hand, virtually the only reference to research results in brain science that one finds in the education literature is evidence that the physical brains of children develop very rapidly at an early age. Although it long has been hoped that growing scientific knowledge in fields that would seem to be foundational to understanding of how children learn would contribute greatly to our ability to identify better teaching practice, there would appear to be few cases where it has.
In recent years the cutting edge of the design of empirical research on the efficacy of educational practices has been to assign students randomly to the practice being studied, and to compare how they do with the performance of a control group. This research strategy provides a way of assessing whether a broadly defined way of doing something is on average efficacious in comparison with another practice, treatment, or arrangement. However, when applied to practice in education, exactly what a nominal practice consists of almost always differs between schools in a school system, or even between classrooms within a school. And almost always, students vary in their responses to different practices. Further, there are good reasons to doubt that another empirical study, done in ways as similar as possible to the first, would yield the same results.
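A small simulation makes the replication point concrete. Suppose a practice helps only some students, and two research teams run the “same” randomized trial on different samples. Every number below (the effect size, the share of responders, the noise) is invented for illustration:

```python
# Repeated "identical" randomized trials of a practice whose true effect
# is heterogeneous: it helps only some students. All numbers are invented.
import random

def run_trial(n=200, seed=0):
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        baseline = rng.gauss(50, 10)        # test score absent the practice
        responder = rng.random() < 0.4      # only 40% of students respond
        effect = 8.0 if responder else 0.0  # gain for responders only
        if rng.random() < 0.5:              # random assignment
            treated.append(baseline + effect + rng.gauss(0, 5))
        else:
            control.append(baseline + rng.gauss(0, 5))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)    # estimated average effect

# The true average effect is 0.4 * 8.0 = 3.2, yet the estimates scatter.
for seed in range(5):
    print(f"trial {seed}: estimated effect = {run_trial(seed=seed):+.2f}")
```

Each simulated trial is internally valid, yet the estimated average effects differ noticeably from sample to sample, and none of them describes what the practice will do for any particular student.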
This is not to argue that random assignment testing is useless as a tool for helping us to improve educational practice. In some cases, scientific testing may show large and quite general differences in the efficacy of particular methods of teaching or modes of organizing schools that, although noticed by some, have not been widely recognized and accepted. In other cases, evidence that a particular technique seems consistently to benefit a certain class of students, if not all students, can be helpful in inducing more effective tailoring of teaching.
Beyond physics envy
When studying phenomena that tend to lie within Pasteur’s Quadrant, it is a mistake to think one can get the precision or generality of knowledge that one expects from physics or chemistry. The kinds of numbers one can estimate and work with in many fields tend to be at best approximate and incomplete indicators of what we would like to know about, rather than precise measures. As a consequence, they almost always can be understood only in a richer context of description and narrative. In research on these kinds of subjects and questions, qualitative description and explanation should not be regarded as an inferior form of scientific understanding, to be replaced by numbers as research advances, but rather as a vital aspect of our understanding that numbers can complement but not replace. A set of numbers without such a qualitative context—the result of research that one might call naked econometrics—is likely to be worthless as a guide to policy, or worse.
A good case in point is the long-standing objective of policymakers and scholars to measure a “rate of return” on public R&D spending in a field of activity or a government program. Any such numbers are bound to be highly sensitive to the very particular and somewhat arbitrary assumptions that enabled them to be generated, and to the particularities of the context for which they were estimated. I would argue that, taken alone, they can tell policymakers nothing of value. On the other hand, if those who generate the numbers, and those who interpret them, know a good deal about the particular programs and activities involved, have a good feel for just what kind of knowledge or capability the research generated, and produced the numbers in light of that understanding, then those numbers may sensibly be interpreted as providing an indicator of what the research accomplished.
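A stylized calculation, with every input invented, illustrates how fragile such figures are. Here the computed internal rate of return for a hypothetical program swings from roughly zero to over twenty percent as just two of the “somewhat arbitrary” assumptions, the share of observed benefits attributed to the program and the lag before benefits appear, are varied within plausible-sounding ranges:

```python
# A stylized "rate of return on public R&D" calculation; every input is
# an invented assumption, which is exactly the point.

def irr(cashflows, lo=0.0, hi=2.0, tol=1e-6):
    """Internal rate of return by bisection on net present value.
    Assumes undiscounted benefits are at least the up-front cost."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return lo

def program_irr(cost=100.0, annual_benefit=40.0, share=0.5, lag=3, years=10):
    """Spend `cost` up front; attributed benefits start after `lag` years."""
    flows = [-cost] + [0.0] * (lag - 1) + [annual_benefit * share] * years
    return irr(flows)

# Vary two of the arbitrary assumptions and watch the answer move.
for share in (0.25, 0.50, 0.75):   # fraction of benefits credited to the program
    for lag in (2, 5, 8):          # years before benefits begin
        print(f"share={share:.2f}, lag={lag}: "
              f"IRR = {program_irr(share=share, lag=lag):.1%}")
```

Nothing in the calculation is dishonest; the trouble is that the answer is almost entirely a function of assumptions the data cannot pin down.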
An important consequence of my argument is that productive advance in practice in many of the arenas that sciences in Pasteur’s Quadrant aim to facilitate requires a significant interaction between learning through research by scientists and learning by doing on the part of those involved in policymaking and implementation. The research enterprise needs not only to light the way to better practice, but also to try to understand at a deeper level what has been learned in practice. I note that this is a well-recognized characteristic of effective fields of engineering research and biomedical research. It is a less-well-recognized need in the social and behavioral sciences, but there have been important moves to establish better two-way interaction in fields like education. And I would argue that this very much needs to be done regarding studies that evaluate research programs.
Research programs that are justified in terms of their potential contributions to solving practical problems should be designed with clear awareness of the strengths and weaknesses of the sciences involved—and scientists and policymakers alike should temper their expectations accordingly. High-level insights of considerable power, such as our knowledge about the importance of early childhood education or the human influence on climate, can provide valuable general guidance for our policies. But expecting science to achieve physics-like, quantified precision that can allow us to optimize policies in domains as diverse as cancer treatment, industrial innovation, K-12 education, and environmental protection is a fantasy. Here we will need to focus on improving our processes of democratic decision making.
Richard R. Nelson is director of the Center for Science, Technology, and Global Development at the Columbia Earth Institute, and Professor Emeritus of International and Public Affairs, Business, and Law at Columbia University.