2 thoughts on “Climate Models as Economic Guides: Scientific Challenge or Quixotic Quest?”

  1. Ron Kenett

    The authors provide a frank and incisive review of the discussions and scientific analysis of climate change, and they warn us of the uncertainties in the predictions of global-warming models. Climate models are designed to produce information. My comment is to suggest reading the authors’ observations from the perspective of information quality, or InfoQ (Kenett and Shmueli, 2014). InfoQ ties together the goal, data, analysis, and utility of an empirical study. It is deconstructed into eight dimensions: i) Data Resolution, ii) Data Structure, iii) Data Integration, iv) Temporal Relevance, v) Generalizability, vi) Chronology of Data and Goal, vii) Operationalization, and viii) Communication. Assessing InfoQ means evaluating these eight dimensions in the context of specific goals and objectives. In their review, the authors focus on the lack of generalizability and the limitations of several global-warming publications when the goal is formulating policies that affect the economy. They state that “ensembles” are not in any sense representative of the range of possible (and plausible) models that fit the data, which implies a lack of generalizability. A similar statement is that the sensitivity analyses vary only a subset of the assumptions, and only one at a time; this precludes interactions among the uncertain inputs, which may be highly relevant to climate projections, and it also indicates poor generalizability. In terms of operationalization, the authors distinguish policy simulation from policy justification; operationalizing a climate model for justification is the problematic part they want to emphasize. An InfoQ assessment of the various studies quoted by the authors could further elucidate the difference between scientific insight and evidence for policymaking.

    The authors’ underlying approach is scientific: the assumption is that the correct view of an issue such as climate change should be evidence-based. Unfortunately, many forces are now participating in this controversial field, with apparent collateral damage. See, for example, the blog post on how such discussions affect the education system in the UK: https://tthomas061.wordpress.com/2014/04/09/climate-catastrophism-for-kiddies/

    If we aim to be “evidence based” and “scientific”, then what the authors write provides an excellent perspective. To help focus the discussion, one might bring in the perspective of information quality, which combines generalizability and operationalization, two critical aspects of the global-warming debate. Even without that, the authors should be gratefully thanked for their insightful contributions.

    Kenett, R.S. and Shmueli, G. (2014), “On Information Quality” (with discussion), Journal of the Royal Statistical Society, Series A (Statistics in Society), 177(1), 3–27.

  2. Larry Kummer

    The authors are to be commended for this, perhaps the clearest statement to date about the limitations of climate models for public policy decision-making. But will this change any minds in this now multi-decade debate? Or will advocates of using climate models respond with further arguments?

    Unless we try something new, it seems likely that only the weather itself will prove how well the models “work” – leaving us exposed to potentially large or even catastrophic outcomes.

    Karl Popper said, in effect, that predictions are the gold standard for testing theories. We can test the models used in the first three IPCC assessment reports by running them with emissions data since their publication (actual data, not the emissions scenarios originally used).

    This would be a fair test acceptable to all sides, avoiding the objections to validation exercises done with hindcasts (e.g., were the models tuned to past data?) and to comparisons of the IPCC Assessment Reports’ forecasts with actual temperatures (e.g., how close were the scenarios to actual emissions?).
    Such a test can be run in a few months and at a moderate cost. It might break the policy gridlock. Whatever the results, we’d know more than we do today.
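    As a rough illustration of how the scoring in such an out-of-sample test might look (a minimal sketch, not the authors’ proposal; the file names, column names, and choice of metrics are my own assumptions), one could compare each model’s re-run projections against the observed temperature record for the post-publication years:

    ```python
    # Minimal sketch: score a model's re-run projections against observations.
    # Assumes two hypothetical CSV files, each with columns: year, anomaly_c
    #   projected_csv - model output re-run with observed emissions since publication
    #   observed_csv  - observed global mean temperature anomalies for the same years
    import numpy as np
    import pandas as pd

    def score_projection(projected_csv: str, observed_csv: str) -> dict:
        proj = pd.read_csv(projected_csv)   # columns: year, anomaly_c
        obs = pd.read_csv(observed_csv)     # columns: year, anomaly_c
        merged = proj.merge(obs, on="year", suffixes=("_proj", "_obs"))

        err = merged["anomaly_c_proj"] - merged["anomaly_c_obs"]
        return {
            "years": (int(merged["year"].min()), int(merged["year"].max())),
            "bias_c": float(err.mean()),                  # mean projection error
            "rmse_c": float(np.sqrt((err ** 2).mean())),  # root-mean-square error
            # difference in linear trends, in degrees C per decade
            "trend_diff_c_per_decade": float(
                (np.polyfit(merged["year"], merged["anomaly_c_proj"], 1)[0]
                 - np.polyfit(merged["year"], merged["anomaly_c_obs"], 1)[0]) * 10
            ),
        }

    # Example call (file names are placeholders):
    # print(score_projection("far_1990_rerun.csv", "observed_anomalies.csv"))
    ```

    The point of publishing something this simple alongside the model runs is transparency: anyone could re-run the comparison on the full post-publication record and check the reported numbers.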

    The peer-reviewed literature has some sketchy versions of such tests, combining old data, short time spans, and poor documentation. Considering the stakes, such a test must be run with full transparency and using all data up to the present (to avoid concern about “cherry-picking” the period examined).
