
New paper in Nature Climate Change says IPCC uses statistical techniques ‘out of date by well over a decade’

http://hockeyschtick.blogspot.com/2013/09/new-paper-in-nature-climate-change-says.html

A new paper published in Nature Climate Change finds, “Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in [IPCC] assessments of climate change” and that “The forthcoming Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the US National Climate Assessment Report will not adequately address this issue. Worse still, prevailing techniques for quantifying the uncertainties that are inherent in observed climate trends and projections of climate change are out of date by well over a decade. Modern statistical methods and models could improve this situation dramatically.” The authors recommend, “Including at least one author with expertise in uncertainty analysis on all chapters of IPCC and US national assessments” and that the IPCC “Replace qualitative assessments of uncertainty with quantitative ones.” A prime example of this would be the ludicrous IPCC claim that it is “very likely” man is the cause of climate change.

Box 1: Recommendations to improve uncertainty quantification.

Replace qualitative assessments of uncertainty with quantitative ones.
Reduce uncertainties in trend estimates for climate observations and projections through use of modern statistical methods for spatio-temporal data.
Increase the accuracy with which the climate is monitored by combining various sources of information in hierarchical statistical models.
Reduce uncertainties in climate change projections by applying experimental design to make more efficient use of computational resources.
Quantify changes in the likelihood of extreme weather events in a manner that is more useful to decision-makers by using methods that are based on the statistical theory of extreme values.
Include at least one author with expertise in uncertainty analysis on all chapters of IPCC and US national assessments.

Uncertainty analysis in climate change assessments

Richard W. Katz, Peter F. Craigmile, Peter Guttorp, Murali Haran, Bruno Sansó & Michael L. Stein


Nature Climate Change 3, 769–771 (2013) doi:10.1038/nclimate1980

Published online: 28 August 2013

Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in assessments of climate change.

Because the climate system is so complex, involving nonlinear coupling of the atmosphere and ocean, there will always be uncertainties in assessments and projections of climate change. This makes it hard to predict how the intensity of tropical cyclones will change as the climate warms, the rate of sea-level rise over the next century or the prevalence and severity of future droughts and floods, to give just a few well-known examples. Indeed, much of the disagreement about the policy implications of climate change revolves around a lack of certainty. The forthcoming Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the US National Climate Assessment Report will not adequately address this issue. Worse still, prevailing techniques for quantifying the uncertainties that are inherent in observed climate trends and projections of climate change are out of date by well over a decade. Modern statistical methods and models could improve this situation dramatically.

Uncertainty quantification is a critical component in the description and attribution of climate change. In some circumstances, uncertainty can increase when previously neglected sources of uncertainty are recognized and accounted for (Fig. 1 shows how uncertainty can increase for projections of sea-level rise). In other circumstances, more rigorous quantification may result in a decrease in the apparent level of uncertainty, in part because of more efficient use of the available information. For example, despite much effort over recent decades, the uncertainty in the estimated climate sensitivity (that is, the long-term response of global mean temperature to a doubling of the CO2 concentration in the atmosphere) has not noticeably decreased [1]. Nevertheless, policymakers need more accurate uncertainty estimates to make better decisions [2].

Figure 1: Results are from 18 models included in the CMIP5 experiment [24]. The median climate model projection with no uncertainty is indicated by the vertical grey line, and uncertainty due to different climate model projections is shown by the histogram (white bars). The coloured curves represent cumulative uncertainty taking into account errors from the prediction of global mean sea-level rise from global mean temperature (red line), from the relation between Seattle sea-level rise and global mean sea-level rise (blue line) and from that between Seattle and Olympia sea-level rise (green line). Figure courtesy of Peter Guttorp, University of Washington.
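
To make the idea of cumulative uncertainty concrete, here is a minimal Python sketch (not the authors' calculation) that propagates a projection through a chain of assumed independent Gaussian error terms, analogous to the model-spread, temperature-to-sea-level and global-to-local steps described in the figure caption; all numbers are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative, made-up numbers (metres of sea-level rise by 2100).
# Spread across climate-model projections only:
model_spread = rng.normal(loc=0.60, scale=0.10, size=n)

# Add (assumed independent, Gaussian) error from predicting global mean
# sea-level rise from global mean temperature:
plus_emulator = model_spread + rng.normal(0.0, 0.08, size=n)

# Add error from relating a local gauge (e.g. Seattle) to the global mean:
plus_local = plus_emulator + rng.normal(0.0, 0.06, size=n)

for label, sample in [("models only", model_spread),
                      ("+ temperature-to-sea-level error", plus_emulator),
                      ("+ global-to-local error", plus_local)]:
    lo, hi = np.percentile(sample, [5, 95])
    print(f"{label:35s} 90% interval: {lo:.2f} to {hi:.2f} m")
```

Each added error source widens the 90% interval, which is the point the figure makes: acknowledging more sources of uncertainty can legitimately increase the reported spread.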


Detailed guidance provided to authors of the IPCC AR5 and the US National Climate Assessment Report emphasizes the use of consistent terminology for describing uncertainty for risk communication. This includes a formal definition of terms such as ‘likely’ or ‘unlikely’ but, oddly, little advice is given about what statistical techniques should be adopted for uncertainty analysis [3,4]. At the least, more effort could be made to encourage authors to make use of modern techniques.

Historically, several compelling examples exist in which the development and application of innovative statistical methods resulted in breakthroughs in the understanding of the climate system (for example, Sir Gilbert Walker’s research in the early twentieth century related to the El Niño–Southern Oscillation phenomenon [5]). We anticipate that similar success stories can be achieved for quantification of uncertainty in climate change.

Although climate observations and climate model output have different sources of error, both exhibit substantial spatial and temporal dependence. Hierarchical statistical models can capture these features in a more realistic manner [6]. These models adopt a ‘divide and conquer’ approach, breaking the problem into several layers of conceptually and computationally simpler conditional statistical models. The combination of these components produces an unconditional statistical model, whose structure can be quite complex and realistic. By using these models, uncertainty in observed climate trends and in projections of climate change can be substantially decreased. This decrease is obtained through ‘borrowing strength’, which exploits the fact that trends or projections ought to be similar at adjacent locations or grid points. Methods that are currently applied usually involve analysing the observations for each location — or the model output for each grid point — separately. Hierarchical statistical models can also be applied to combine different sources of climate information (for example, ground and satellite measurements), explicitly taking into account that they are recorded on different spatial and temporal scales [7,8].
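
As a rough illustration of ‘borrowing strength’, the following toy Python sketch fits independent per-site trends and then shrinks them toward the across-site mean with a simple empirical-Bayes weight. It is a two-stage caricature of the hierarchical spatio-temporal models the authors cite, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_years = 12, 30
years = np.arange(n_years)

# Simulate neighbouring sites sharing a common warming trend (deg C / yr)
# plus small site-to-site differences and noisy observations.
true_common = 0.02
site_trends = true_common + rng.normal(0, 0.005, n_sites)
obs = site_trends[:, None] * years + rng.normal(0, 0.3, (n_sites, n_years))

# Stage 1: independent per-site OLS trend estimates and their sampling variances.
x = years - years.mean()
est = obs @ x / (x @ x)                       # per-site slope estimates
resid = obs - est[:, None] * x
sigma2 = resid.var(axis=1, ddof=2) / (x @ x)  # variance of each slope estimate

# Stage 2: empirical-Bayes shrinkage ("borrowing strength"): pull each noisy
# site estimate toward the across-site mean, weighted by the ratio of
# between-site variance to total variance.
tau2 = max(est.var(ddof=1) - sigma2.mean(), 1e-8)  # between-site variance
weight = tau2 / (tau2 + sigma2)
pooled = weight * est + (1 - weight) * est.mean()

print("mean abs error, independent fits :", np.abs(est - site_trends).mean())
print("mean abs error, shrunken fits    :", np.abs(pooled - site_trends).mean())
```

When the per-site sampling noise is comparable to or larger than the genuine site-to-site spread, the shrunken estimates are typically closer to the true trends than the site-by-site fits, which is the intuition behind the hierarchical approach.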

Even without any increase in computational power, statistical principles of experimental design can reduce uncertainty in the climate change projections produced by climate models through more efficient use of these limited resources [9]. Rather than allocating them uniformly across all combinations of the alternatives being examined, the allocation can be made in a manner that maximizes the amount of information obtained. For example, in assessing the impact of different global and regional climate models on climate projections, we do not need to examine all combinations of these models.
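
One classical way to avoid running every combination is a fractional design such as a Latin square. The sketch below is illustrative only, with placeholder model and scenario names: it shows how 16 runs can still cover every pairwise combination of global model, regional model and scenario that a 64-run full factorial would.

```python
from itertools import product

gcms      = [f"GCM{i}" for i in range(4)]   # hypothetical global models
rcms      = [f"RCM{j}" for j in range(4)]   # hypothetical regional models
scenarios = [f"SSP{k}" for k in range(4)]   # hypothetical forcing scenarios

full_factorial = list(product(gcms, rcms, scenarios))          # 64 runs

# Latin-square fraction: for GCM i and RCM j, run scenario (i + j) mod 4.
# Every GCM-RCM, GCM-scenario and RCM-scenario pair still appears exactly once,
# so main effects and pairwise comparisons remain estimable from 16 runs.
latin_square = [(gcms[i], rcms[j], scenarios[(i + j) % 4])
                for i in range(4) for j in range(4)]

print(f"full factorial       : {len(full_factorial)} runs")
print(f"Latin-square fraction: {len(latin_square)} runs")
for run in latin_square[:4]:
    print(run)
```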

The recent IPCC Special Report Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation [10], on which the IPCC AR5 relies, does not take full advantage of the well-developed statistical theory of extreme values [11]. Instead, many results are presented in terms of spatially and temporally aggregated indices and/or consider only the frequency, not the intensity, of extremes. Such summaries are not particularly helpful to decision-makers. Unlike the more familiar statistical theory for averages, extreme value theory does not revolve around the bell-shaped curve of the normal distribution. Rather, approximate distributions for extremes can depart far from the normal, including tails that decay as a power law. For variables such as precipitation and stream flow that possess power-law tails, conventional statistical methods would underestimate the return levels (that is, high quantiles) used in engineering design and, even more so, their uncertainties. Recent extensions of statistical methods for extremes make provision for non-stationarities such as climate change. In particular, they now provide non-stationary probabilistic models for extreme weather events, such as floods, as called for by Milly and colleagues [12].
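
For instance, a block-maxima analysis with the generalized extreme value (GEV) distribution yields return levels directly. The hedged Python sketch below uses synthetic annual maxima and scipy's genextreme (whose shape parameter c equals minus the usual GEV shape ξ), and contrasts the result with a normal fit that ignores the heavy tail.

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(1)

# Illustrative only: simulate 60 years of annual-maximum daily rainfall (mm)
# from a heavy-tailed GEV. In scipy's convention, shape c = -0.2 corresponds
# to the usual GEV shape xi = +0.2, i.e. a power-law upper tail.
annual_max = genextreme.rvs(-0.2, loc=50, scale=12, size=60, random_state=rng)

# Fit the GEV to the annual maxima by maximum likelihood.
c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)

# The T-year return level is the (1 - 1/T) quantile of the fitted distribution.
for T in (20, 50, 100):
    level = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"{T:>3}-year return level (GEV fit): {level:6.1f} mm")

# A normal fit to the same maxima ignores the heavy tail and will typically
# understate rare extremes.
mu, sd = annual_max.mean(), annual_max.std(ddof=1)
print(f"100-year return level (normal fit): {norm.ppf(0.99, loc=mu, scale=sd):6.1f} mm")
```

Non-stationary extensions of this idea let the GEV location or scale parameters depend on time or on covariates such as global mean temperature, which is what makes them suitable for a changing climate.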

We mention just one concrete example for which reliance on extreme value methods could help to resolve an issue with important policy implications. It has recently been claimed that, along with an increase in the mean, the probability distribution of temperature is becoming more skewed towards higher values [13,14]. But techniques based on extreme value theory do not detect this apparent increase in skewness [15]. Any increase is evidently an artefact of an inappropriate method of calculation of skewness when the mean is changing [16].
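
A toy illustration of how a changing mean can masquerade as skewness (not a reproduction of the analyses in refs 13-16): symmetric noise around an accelerating mean acquires positive sample skewness when the mean change is ignored, and essentially none once it is removed.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)
years = np.arange(60)

# Toy numbers: symmetric (Gaussian) year-to-year variability around a mean
# that warms at an accelerating rate; no change in distributional shape is
# built into the simulation.
mean_t = 0.0006 * years**2                                  # accelerating mean (deg C)
anoms = mean_t + rng.normal(0.0, 0.3, (2000, years.size))   # 2000 simulated series

pooled = anoms.ravel()                # skewness computed with the mean change ignored
detrended = (anoms - mean_t).ravel()  # mean change removed before pooling

print("sample skewness, mean change ignored :", round(skew(pooled), 3))
print("sample skewness, detrended first     :", round(skew(detrended), 3))
```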

Besides uncertainty quantification, the communication of uncertainty to policymakers is an important concern. Several fields conduct research on this topic, in addition to statistics (including decision analysis and risk analysis). Statisticians have made innovative contributions to the graphic display of probabilities to make communication more effective [17], methods that have not yet found much, if any, use in climate change assessments (see Spiegelhalter et al. [17] for an example of how uncertainties about future climate change are now presented).

It should be acknowledged that statisticians alone cannot solve these challenging problems in uncertainty quantification. Rather, increased collaboration between statistical and climate scientists is needed. Examples of current activities whose primary purpose is to stimulate such collaborations include: CliMathNet [18], the Geophysical Statistics Project at the National Center for Atmospheric Research [19], International Meetings on Statistical Climatology [20], the Nordic Network on Statistical Approaches to Regional Climate Models for Adaptation [21] and the Research Network for Statistical Methods for Atmospheric and Oceanic Sciences [22].

Recommendations

To bring uncertainty quantification into the twenty-first century, we offer a number of suggestions (Box 1). Elaborated on here, these range from how climate change research is conducted to the process by which climate change assessments are produced.

Box 1: Recommendations to improve uncertainty quantification. [see above]


Whenever feasible, qualitative uncertainty assessments should be replaced with quantitative ones. At a minimum, a standard error should be attached to any estimate, along with a description of how it was calculated. Ideally, an entire probability distribution should be provided — innovative graphical techniques can be used to communicate these uncertainties in a more effective manner. In addition, modern statistical methods for spatio-temporal data, such as hierarchical models, should be used to reduce uncertainties in trend estimates for climate observations and projections of climate change. By taking into account spatial and temporal dependence, these techniques provide a powerful tool for detection of observed and projected changes in climate. To increase the accuracy with which the climate is monitored, various sources of information need to be combined using hierarchical statistical models. Such techniques can take into account differences in the uncertainties of, for instance, in situ and remotely sensed measurements.
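
As a minimal example of attaching a standard error, and ideally a full distribution, to an estimate, the sketch below fits an ordinary least-squares trend to synthetic temperatures and reports both the analytic standard error (assuming independent errors) and a residual-bootstrap distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1960, 2020)
temps = 0.015 * (years - years.mean()) + rng.normal(0, 0.2, years.size)  # synthetic

# Point estimate plus standard error (OLS trend, independent errors assumed).
x = years - years.mean()
slope = x @ temps / (x @ x)
resid = temps - temps.mean() - slope * x
se = np.sqrt(resid @ resid / (len(x) - 2) / (x @ x))
print(f"trend: {slope:.4f} +/- {se:.4f} deg C per year (1 s.e.)")

# Ideally, report a whole distribution: here a simple residual bootstrap.
boot = []
for _ in range(2000):
    y_star = temps.mean() + slope * x + rng.choice(resid, size=len(x), replace=True)
    boot.append(x @ y_star / (x @ x))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% interval: {lo:.4f} to {hi:.4f} deg C per year")
```

Real assessments would also need to account for autocorrelation and spatial dependence, which is where the hierarchical models discussed above come in; this sketch only illustrates the reporting principle.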

Statistical principles of experimental design should be applied to climate change experiments using numerical models of the climate system. Through making more efficient use of computational resources, uncertainties in climate change projections can be reduced. Methods based on the statistical theory of extreme values should be used to quantify changes in the likelihood of extreme weather events, whether based on climate observations or on projections from climate models. In this way, information more useful to decision-makers about the risk of extreme events (for example, in terms of changing return levels) can be provided. Finally, to improve the quality of the treatment of uncertainty, at least one author with expertise in uncertainty analysis should be included on all chapters of IPCC and US national assessments. These authors could come from the field of statistics, as well as from other related fields including decision analysis and risk analysis.

If these recommendations are adopted, the resulting improvements in uncertainty quantification would help policymakers to better understand the risks of climate change and adopt policies that prepare the world for the future.

