2 thoughts on “When All Models Are Wrong”

  1. Mel Jay

    Thank you for a very nice description of the problem.
    But although the suggested improvements to modeling practices
    should be helpful, I have doubts about how much they can overcome
    human nature.

    Because I think it is true, as Hugo Mercier discusses in this
    interview on

    in reference to his paper written with Dan Sperber,

    “Why Do Humans Reason? Arguments for an Argumentative Theory”:

    Reasoning was not designed to pursue the truth.
    Reasoning was designed by evolution to help us win arguments.

  2. DP Wolf

    Mel Jay is correct in noting that, as biological organisms, humans usually cannot reason about the world in an unbiased way – humans have interests and biases, most of which are not under their conscious control. This is ultimately why politics is what it is, and why the “science-policy interface” with which “post-normal science” is so concerned is polluted with interests and biases (because it involves deliberations among humans). It’s a wonder that the creators and propagators of post-normal science don’t explicitly include the modern evolutionary synthesis and empirical evidence for human nature in their deliberations. Perhaps that is because such evidence is politically unpopular within European academic and government institutions. Better to just say “socially constructed”.

    I find this list to be amateurish. For example, under the “Focus the analysis” recommendation, they argue that sensitivity analysis should vary all uncertain factors at once rather than one at a time – a curious suggestion, both because they do not offer any examples of how this is to be done, and because the title of this recommendation is “focus the analysis”, which says nothing about what they are actually recommending.
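
    To make the distinction concrete (this is only a minimal sketch with an invented toy model, not anything taken from the article or from Saltelli’s paper), “varying all uncertain factors at once” usually means sampling every input simultaneously, so that the joint input space, including interactions, is explored, whereas one-at-a-time analysis perturbs a single input around a fixed baseline and cannot see interactions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x1, x2, x3):
        # Hypothetical toy model with an interaction between x1 and x2.
        return x1 * x2 + x3

    baseline = {"x1": 1.0, "x2": 1.0, "x3": 1.0}
    low, high = 0.5, 1.5  # assumed uncertainty range for every factor

    # One-at-a-time (OAT): perturb each factor alone, holding the others at the
    # baseline. Every factor produces the same output swing here, and the design
    # can never reveal that the effect of x1 depends on the value of x2.
    for name in baseline:
        lo = dict(baseline, **{name: low})
        hi = dict(baseline, **{name: high})
        print(f"OAT output swing for {name}: {model(**hi) - model(**lo):.2f}")

    # All-at-once: sample every uncertain factor simultaneously (plain Monte
    # Carlo here; variance-based designs such as Sobol' are the usual refinement).
    n = 10_000
    samples = {name: rng.uniform(low, high, n) for name in baseline}
    y = model(**samples)

    # With joint samples the interaction becomes visible: the fitted effect
    # (regression slope) of x1 on the output differs depending on whether x2
    # happens to be low or high in the same sample.
    for label, mask in [("x2 low", samples["x2"] < 0.75),
                        ("x2 high", samples["x2"] > 1.25)]:
        slope = np.polyfit(samples["x1"][mask], y[mask], 1)[0]
        print(f"effect of x1 when {label}: {slope:.2f}")
    ```

    Whether this global, sample-everything-jointly reading is exactly what they have in mind is unclear, since, again, they give no example.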

    I also find the broader post-normal science project to be vacuous. If you want to ensure scientific evidence is used in the service of society, encourage collaboration, pluralism, and transparency, and rigorously evaluate or “audit” scientific findings. Sensitivity analysis is naturally a part of this process, and Saltelli and Funtowicz are offering nothing new in their “sensitivity audit”, just dressing it up in new language (as usual). As for their notational scheme (NUSAP), it is a tedious bore.
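
    For readers who have not met it, NUSAP simply asks that every reported number carry five qualifiers: Numeral, Unit, Spread, Assessment, and Pedigree. A minimal sketch of what such a record might look like as a data structure (the field values and pedigree criteria below are invented for illustration, not taken from any NUSAP publication):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class NusapEntry:
        """One quantity qualified with the five NUSAP categories."""
        numeral: float    # the reported number
        unit: str         # what the number is measured in
        spread: str       # quantitative uncertainty, e.g. a range or +/- value
        assessment: str   # qualitative judgment of the number's reliability
        pedigree: dict = field(default_factory=dict)  # scores for how it was produced

    # Invented example entry, for illustration only.
    example = NusapEntry(
        numeral=0.5,
        unit="metres of sea-level rise by 2100",
        spread="0.3 to 0.8",
        assessment="fair",
        pedigree={"empirical basis": 2, "methodological rigour": 3, "validation": 1},
    )
    print(example)
    ```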

    van der Sluijs writes in their latest book “Science on the Verge”: “A peer review process for quantitative evidence would need to systematically include approaches such as NUSAP, sensitivity auditing in the case of mathematical or statistical models, and in general the exercise of good judgment.” Scrap the NUSAP, realize that “sensitivity auditing” is nothing more than sensitivity analysis, and keep the “good judgment”. Better yet, work on it; your collective judgment is lacking.

