In Numbers We Trust
Review of
Trust in Numbers: The Pursuit of Objectivity in Science and Public Life
by Theodore Porter
Princeton, N.J.: Princeton University Press, 1995. 311 pages
Our scientific culture, and much of our public life, is based on trust in numbers. They are commonly accepted as the means to achieving objectivity in analysis, certainty in conclusions, and truth. Numbers tell us about the health of our society (as in the rates of occurrence of unwanted behavior), and they provide a demarcation between what is accepted as safe and what is believed to be dangerous. In Trust in Numbers, Theodore Porter, an associate professor of history at UCLA and the author of The Rise of Statistical Thinking, 1820-1900 (Princeton, 1986), unpacks this assumption and uses history to show how such a trust may sometimes be based less on the solidity of the numbers themselves than on the needs of expert and client communities. The pursuit of objectivity through numbers defines modern public policy rather like the pursuit of happiness defines modern private life; and neither pursuit is guaranteed to lead to simple success.
In looking critically at the rigor of quantitative analysis, the book treads on ground that has recently become very delicate. One initial reaction to this study could be to dismiss (or denounce) it as yet another attempt to “demystify” quantification, thus undermining public faith in scientific objectivity. But the historical approach protects the book against such simplistic interpretations. In the examples, we witness a wide variation among the quantitative arguments and their motivations. Also, we are clearly shown the complexity and nuance in the drives for this sort of objectivity, as well as the different degrees of success that they can achieve.
A great benefit of the historical style is that the book is a very good read. The more extended accounts read like novellas. We learn how, in Victorian England, accountants and actuaries skillfully deflected government attempts to introduce standardized methods of accounting for insurance firms. Serious scandals had already occurred, scandals reflected in the fiction of Charles Dickens. But the professionals advocated reliance on their own skill and judgment rather than on public regulation with explicit standards of practice. One of their main arguments was that the public should be spared the unnecessary alarm that could arise from excessive openness; hence they proposed that regulation be accomplished through informal, private understandings. This British style of regulation (closed, informal, and paternalistic), which still persists, differs starkly from the American system of open, explicit, quantitative, and adversarial regulation.
Neither approach is without flaws. One weakness of the British strategy was apparent in the recent “mad cow” epidemic, which was largely a result of government complacency, including the failure to create a database of affected cattle and herds. On the American side, the legendary U.S. Army Corps of Engineers was an early user of quantitative cost-benefit analysis in its assessment of navigation and water-control projects. But the analysis was applied with a delicate interplay of the objective and the subjective, because the Corps had to maintain its reputation for scientific impartiality while recognizing the folkways of the U.S. Congress. The limits of quantification (and of proclaimed objectivity) could be seen whenever a powerful interest group, such as the railways, expressed its opposition to waterway improvements. On such occasions discussions took a turn that is now familiar, with the quality of the experts’ testimony becoming more contested than the details of the numerical arguments they put forward.
The increasing complexities of cost-benefit analysis were dramatically revealed during the New Deal period, in the great struggles between the Bureau of Reclamation and the Corps. Each had cost-benefit analyses corresponding to their separate mandates (irrigation and flood control, respectively), and these were manipulated by the competing interests (small versus large farmers). Such unseemly battles within the federal bureaucracy provided an impetus for the development of ever more refined and elaborate methods of cost-benefit analysis, in which “objectivity” is protected by a multitude of standardized numerical assessment routines.
Reading these stories could be very illuminating for someone whose professional training has provided no preparation for the real problems of quantification in practice. For such practitioners (and there are many), Porter’s historical accounts convey a sort of insiders’ knowledge, full of “dirty truths” about errors and pitfalls. This private awareness coexists in a living contradiction with the discipline’s public face of perfect, impersonal objectivity as guaranteed by its numbers. In some fields, such as environmental risk assessment, there is now a vigorous public debate about the numbers, and no one is in any doubt that values and assumptions influence the risk calculations. Yet the field flourishes and is actually strengthened by the admission that it is not covering up hidden weaknesses.
Even physicists are subjective
For research scientists the most important chapter of Trust in Numbers is the very last, in which Porter shows that in the most highly developed and leading research communities, the ordinary sort of “objectivity,” as secured by open, refereed publication along with quantitative techniques, is really quite secondary. Among high-energy physicists there is a community of trust: not blind trust, but a highly nuanced evaluation of researchers by one another, which assures the reliability of information. Of course, all the research depends on highly objective and disciplined technical work by lower-status personnel, but the creative researchers themselves employ a “personal knowledge” in making and assessing communications. Indeed, we must ask ourselves how scientific creativity could flourish in a regime dominated by standardized routines.
A broad, balanced, and philosophically informed history such as this not only provides a supplement to the narrowly technical education that so many scientists and experts receive but also furnishes background materials for a general debate on the quality of the numbers that we use in developing public policy. Every society needs its totems. In premodern times the enchanted arts, including the quantitative disciplines of gematria (divination by numbers) and astrology, offered security and demanded trust. We now know that they are but pseudo-sciences, although they retain a peculiar popularity even among the literate. Their social and cultural background made them plausible; they depended on trust in the gods. Now it is in the numbers that we trust, and scientific credibility is vested in apparent objectivity, achieved through quantification. Of course our modern trust is better founded; but trust, like liberty, needs constant renewal.
I find it disturbing that pseudo-science can exist within our modern culture of scientific objectivity, but the possibility needs to be confronted. The phenomenon of garbage-in/garbage-out is alive and well in various fields of environmental, military, social, and political engineering. When the uncertainties in the inputs are not revealed, the outputs of a quantitative analysis become meaningless. We have then entered the realm of pseudo-science, where people put faith in numbers just because they are numbers. The numerical calculations used to support the U.S. Strategic Defense Initiative were an example of numeric mystification with no foundation in reality. The old saying that “figures can’t lie, but liars can figure” can now be extended from statistics to a variety of fields.
Porter is to be congratulated for showing how intimate can be the mixture of objectivity and subjectivity, real and pseudo-quantification, awareness and self-deception, and vision and fantasy, in the invocation of trust in numbers. His historical insights can provide the materials we need for a debate on quality in quantities, a debate which is long overdue.