
Search Results for: Global warming hiatus – Page 10

Climatologist Dr. Pat Michaels: ‘Homogenized’ US Warming Trend May Be Grossly Exaggerated

For much of the last year, Washington has been abuzz with rumors that NOAA manipulated the global temperature records to get them to “disappear” the “hiatus” in global warming since the mid-1990s, a phenomenon that is obvious in global satellite data. Congressman Lamar Smith (R-TX), chair of the House Committee on Science, Space and Technology, seems to be smelling smoke over this. It appears that Anthony Watts has found the fire.

No global warming at all for 18 years 9 months – a new record – The Pause lengthens again – just in time for UN Summit in Paris

Special to Climate Depot

The Pause lengthens again – just in time for Paris
No global warming at all for 18 years 9 months – a new record
By Christopher Monckton of Brenchley

As the faithful gather around their capering shamans in Paris for the New Superstition’s annual festival of worship, the Pause lengthens yet again. One-third of Man’s entire influence on climate since the Industrial Revolution has occurred since February 1997. Yet the 225 months since then show no global warming at all (Fig. 1). With this month’s RSS temperature record, the Pause beats last month’s record and now stands at 18 years 9 months.

Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean surface temperature anomaly dataset shows no global warming for 18 years 9 months since February 1997, though one-third of all anthropogenic forcings have occurred during the period of the Pause.

The accidental delegate from Burma provoked shrieks of fury from the congregation during the final benediction in Doha three years ago, when he said the Pause had endured for 16 years. Now, almost three years later, the Pause is almost three years longer.

It is worth understanding just how surprised the modelers ought to be by the persistence of the Pause. NOAA, in a very rare fit of honesty, admitted in its 2008 State of the Climate report that 15 years or more without global warming would demonstrate a discrepancy between prediction and observation. The reason for NOAA’s statement is that there is supposed to be a sharp and significant instantaneous response to a radiative forcing such as adding CO2 to the air. The steepness of this predicted response can be seen in Fig. 1a, which is based on a paper on temperature feedbacks by Professor Richard Lindzen’s former student, Professor Gerard Roe, in 2009. The graph of Roe’s model output shows that the initial expected response to a forcing is supposed to be an immediate and rapid warming.
But, despite the very substantial forcings in the 18 years 9 months since February 1997, not a flicker of warming has resulted.

Figure 1a: Models predict rapid initial warming in response to a forcing. Instead, no warming at all is occurring. Based on Roe (2009).

At the Heartland and Philip Foster events in Paris, I shall reveal in detail the three serious errors that have led the models to over-predict warming so grossly.

The current el Niño, as Bob Tisdale’s distinguished series of reports here demonstrates, is at least as big as the Great el Niño of 1998. The RSS temperature record is beginning to reflect its magnitude. From next month on, the Pause will probably shorten dramatically and may disappear altogether for a time. However, if there is a following la Niña, as there often is, the Pause may return at some time from the end of next year onward.

The hiatus period of 18 years 9 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate. And yes, the start-date for the Pause has been inching forward, though just a little more slowly than the end-date, which is why the Pause continues on average to lengthen.

So long a stasis in global temperature is simply inconsistent not only with the extremist predictions of the computer models but also with the panic whipped up by the rent-seeking profiteers of doom rubbing their hands with glee in Paris. The UAH dataset shows a Pause almost as long as the RSS dataset. However, the much-altered surface tamperature datasets show a small warming rate (Fig. 1b).

Figure 1b. The least-squares linear-regression trend on the mean of the GISS, HadCRUT4 and NCDC terrestrial monthly global mean surface temperature anomaly datasets shows global warming at a rate equivalent to 1.1 C° per century during the period of the Pause from January 1997 to September 2015.

Bearing in mind that one-third of the 2.4 W m–2 radiative forcing from all manmade sources since 1750 has occurred during the period of the Pause, a warming rate equivalent to little more than 1 C°/century is not exactly alarming.

As always, a note of caution. Merely because there has been little or no warming in recent decades, one may not draw the conclusion that warming has ended forever. The trend lines measure what has occurred: they do not predict what will occur. The Pause – politically useful though it may be to all who wish that the “official” scientific community would remember its duty of skepticism – is far less important than the growing discrepancy between the predictions of the general-circulation models and observed reality.

The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, continues to widen. If the Pause lengthens just a little more, the rate of warming in the quarter-century since the IPCC’s First Assessment Report in 1990 will fall below 1 C°/century equivalent.

Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 309 months January 1990 to September 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at just 1.02 K/century equivalent, taken as the mean of the RSS and UAH v.6 satellite monthly mean lower-troposphere temperature anomalies.

Figure 3.
Predicted temperature change, January 2005 to September 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v.6 satellite lower-troposphere temperature anomalies.

As ever, the Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century.

In a rational scientific discourse, those who had advocated extreme measures to prevent global warming would now be withdrawing and calmly rethinking their hypotheses. However, this is not a rational scientific discourse. On the questioners’ side it is rational: on the believers’ side it is a matter of increasingly blind faith. The New Superstition is no fides quaerens intellectum.

Key facts about global temperature

These facts should be shown to anyone who persists in believing that, in the words of Mr Obama’s Twitteratus, “global warming is real, manmade and dangerous”.

The RSS satellite dataset shows no global warming at all for 225 months from February 1997 to October 2015 – more than half the 442-month satellite record.

There has been no warming even though one-third of all anthropogenic forcings since 1750 have occurred since the Pause began in February 1997.

The entire RSS dataset for the 442 months December 1978 to September 2015 shows global warming at an unalarming rate equivalent to just 1.13 Cº per century.

Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.

The global warming trend since 1900 is equivalent to 0.75 Cº per century. This is well within natural variability and may not have much to do with us.

The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.

Compare the warming on the Central England temperature dataset in the 40 years 1694-1733, well before the Industrial Revolution, equivalent to 4.33 C°/century.

In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.

The warming trend since 1990, when the IPCC wrote its first report, is equivalent to 1 Cº per century. The IPCC had predicted close to thrice as much.

To meet the IPCC’s central prediction of 1 C° warming from 1990-2025, in the next decade a warming of 0.75 C°, equivalent to 7.5 C°/century, would have to occur.

Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business-as-usual centennial warming prediction of 4.8 Cº warming to 2100.

The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.

The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.

The oceans, according to the 3600+ ARGO buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century, or 1 C° in 430 years.

Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.

Technical note

Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
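The start-date calculation described above can be sketched in a few lines of code. This is a minimal illustration, not RSS's or the author's actual routine: the function and the synthetic anomaly series below are invented purely for demonstration, standing in for the real RSS monthly data.

```python
# Find the earliest start month whose least-squares trend to the end of the
# record is non-positive -- i.e. the longest "zero trend" window. The series
# here is synthetic: an early warming ramp followed by a long flat-to-slightly-
# declining noisy stretch, so that such a window exists.
import numpy as np

def earliest_zero_trend_start(anomalies, min_months=24):
    """Earliest start index with a non-positive OLS slope to the end,
    or None if every admissible window shows warming."""
    t = np.arange(len(anomalies))
    for start in range(len(anomalies) - min_months):
        slope = np.polyfit(t[start:], anomalies[start:], 1)[0]
        if slope <= 0:
            return start
    return None

rng = np.random.default_rng(42)
series = np.concatenate([np.linspace(-0.4, 0.1, 120),   # early warming ramp
                         np.linspace(0.1, 0.05, 225)])  # long flat stretch
series += rng.normal(0.0, 0.1, series.size)             # measurement noise

start = earliest_zero_trend_start(series)
print("zero-trend window starts at month index:", start)
```

A window found this way lengthens or shortens as new months arrive, which is the behaviour the article describes for the Pause's start- and end-dates.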
The fact of a long Pause is an indication of the widening discrepancy between prediction and reality in the temperature record. The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than the rest to capture such fluctuations without artificially filtering them out. Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years. The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file and plots them automatically using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity. The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line. 
The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend. Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat. RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures. Dr Mears’ results are summarized in Fig. T1: Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of Chichón (1983) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998. Dr Mears writes: “The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  
This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.” Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph: “Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.” In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. The headline graph in these monthly reports begins in 1997 because that is as far back as one can go in the data and still obtain a zero trend. Fig. T1a. Graphs for RSS and GISS temperatures starting both in 1997 and in 2001. For each dataset the trend-lines are near-identical, showing conclusively that the argument that the Pause was caused by the 1998 el Nino is false (Werner Brozek and Professor Brown worked out this neat demonstration). Curiously, Dr Mears prefers the terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite data to calibrate its own terrestrial record. 
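The Brozek–Brown start-date comparison mentioned above is straightforward to reproduce in outline: fit least-squares trends to the same series from two different start months and compare the slopes. The series below is synthetic (flat noise plus two el-Niño-like spikes, loosely mimicking 1998 and 2010), so the numbers are illustrative only; whether the two trends come out near-identical on real data depends on the data themselves.

```python
# Compare OLS trends fitted from two start points on one synthetic series:
# from month 0 (early spike included, the "1997" start) and from month 48
# (early spike excluded, the "2001" start). All values here are invented.
import numpy as np

def ols_slope(y, start=0):
    """Least-squares slope of y against month index, from a given start."""
    x = np.arange(len(y))
    return np.polyfit(x[start:], y[start:], 1)[0]

rng = np.random.default_rng(0)
anoms = rng.normal(0.0, 0.08, 225)   # ~18.75 years of monthly anomalies
anoms[12:18] += 0.5                  # early spike, standing in for 1998
anoms[156:162] += 0.5                # later spike, standing in for 2010

slope_all = ols_slope(anoms)         # fit including the early spike
slope_late = ols_slope(anoms, 48)    # fit starting after the early spike
print(f"from month 0: {slope_all:+.5f} C/month; from month 48: {slope_late:+.5f} C/month")
```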
The length of the Pause, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. Sources of the IPCC projections in Figs. 2 and 3 IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded: “Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.” That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted. In 1990, the IPCC said this: “Based on current models we predict: “under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii). 
Later, the IPCC said: “The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv).

The orange region in Fig. 2 represents the IPCC’s medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025. The IPCC’s predicted global warming over the 25 years from 1990 to the present differs little from a straight line (Fig. T2).

Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).

Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.

But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).

Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).

Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date.
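The "twice as much warming in the next ten years" claim above is simple arithmetic, sketched here with the article's own round figures:

```python
# Check the arithmetic behind the "1 K by 2025" claim using the article's
# figures: 1.0 C predicted for 1990-2025, ~0.27 C observed over 1990-2015.
predicted_total = 1.0          # C, IPCC (1990) central estimate to 2025
observed_so_far = 0.27         # C, 1990-2015 outturn cited in the text
years_elapsed, years_left = 25, 10

remaining = predicted_total - observed_so_far        # C still to come by 2025
rate_needed = remaining / years_left * 100           # C/century equivalent
rate_so_far = observed_so_far / years_elapsed * 100  # C/century equivalent
print(f"needed: {rate_needed:.1f} C/century vs. observed {rate_so_far:.1f} C/century")
```

The article rounds the remaining warming to 0.75 C° (hence its 7.5 C°/century figure elsewhere); with 0.27 C° observed the requirement is 7.3 C°/century, which makes the same point.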
True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless. The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn. Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality. To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean surface temperature anomalies – is 0.27 Cº, equivalent to little more than 1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, that was predicted for Scenario A in IPCC (1990) with “substantial confidence” was approaching three times too big. In fact, the outturn is visibly well below even the least estimate. In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 C/century equivalent. Now it is just 1.7 Cº equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration. Is the ocean warming? 
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean and, since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.

Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000-square-kilometre box more than 316 km square and 2 km deep. Plainly, results obtained at a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.

Unfortunately ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº decade–1, or 0.2 Cº century–1.

Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).

Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which makes the change seem a whole lot larger. The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming.
All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize. Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured. NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is. Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, or rather more than double the rate shown by ARGO. ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution. What one can say is that, on such evidence as these datasets are capable of providing, the difference between underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming. 
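The zettajoule-to-Kelvin conversion discussed above can be sanity-checked with round numbers. The layer mass and specific heat below are textbook approximations chosen for this sketch, not NOAA's constants, so treat this as an order-of-magnitude check only:

```python
# Convert an ocean heat content change in zettajoules back into a mean
# temperature change for the 0-2000 m layer. Mass and specific heat are
# rough textbook figures (assumptions), not NOAA's processing constants.
heat_zj = 260.0                 # ZJ gained 1970-2014, per the text (Fig. T6)
joules = heat_zj * 1e21         # 1 ZJ = 1e21 J
layer_mass_kg = 0.7e21          # ~0-2000 m slice of a ~1.4e21 kg ocean (assumed)
specific_heat = 4000.0          # J/(kg K), approximate for seawater

delta_t = joules / (layer_mass_kg * specific_heat)   # K over the 44 years
per_century = delta_t / 44 * 100
print(f"dT ~ {delta_t:.2f} K over 1970-2014, ~{per_century:.2f} K/century")
```

With these assumptions the 260 ZJ figure comes out near 0.09 K over 44 years, about 0.2 K/century, consistent with the rate quoted in the text.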
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth. Figure T7. Near-global ocean temperatures by stratum, 0-1900 m, providing a visual reality check to show just how little the upper strata are affected by minor changes in global air surface temperature. Source: ARGO marine atlas. Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean. Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere. If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has. In early October 2015 Steven Goddard added some very interesting graphs to his website. The graphs show the extent to which sea levels have been tampered with to make it look as though there has been sea-level rise when it is arguable that in fact there has been little or none. Why were the models’ predictions exaggerated? 
In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T8):

Figure T8. Predicted manmade radiative forcings (IPCC, 1990).

However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990 (Fig. T9):

Figure T9: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).

Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T10):

Figure T10. Energy budget diagram for the Earth from Stephens et al. (2012).

In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.

It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC.
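The relationship between a sensitivity figure such as "1 Cº per CO2 doubling" and a given forcing can be made concrete with the standard simplified logarithmic forcing expression, dF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998). The sketch below compares the article's sensitivity figure with the 3 Cº IPCC best estimate it cites; the CO2 concentrations are illustrative round values chosen for the example, not figures from the article:

```python
# Equilibrium warming for a CO2 rise, given a per-doubling sensitivity, using
# the simplified forcing expression dF = 5.35 * ln(C/C0). Concentrations
# (280 -> 400 ppm) are illustrative round values, not the article's.
import math

F_2X = 5.35 * math.log(2.0)          # ~3.71 W/m^2 per CO2 doubling

def equilibrium_warming(c_now, c_pre, sensitivity_per_doubling):
    """dT = sensitivity * dF / F_2x, with dF from the logarithmic expression."""
    forcing = 5.35 * math.log(c_now / c_pre)
    return sensitivity_per_doubling * forcing / F_2X

dT_low = equilibrium_warming(400.0, 280.0, 1.0)   # ~1 C/doubling (Monckton et al., 2015)
dT_ipcc = equilibrium_warming(400.0, 280.0, 3.0)  # 3 C/doubling (Charney/IPCC)
print(f"280 -> 400 ppm: {dT_low:.2f} C at 1 C/doubling vs. {dT_ipcc:.2f} C at 3 C/doubling")
```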
Professor Ray Bates gave a paper in Moscow in summer 2015 in which he concluded, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T11), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both by Lindzen & Choi and by Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.

Figure T11. Reality (center) vs. 11 models. From Lindzen & Choi (2009).

A growing body of reviewed papers finds climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences, and is still the IPCC’s best estimate today. On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.

Finally, how long will it be before the Freedom Clock (Fig. T12) reaches 20 years without any global warming? If it does, the climate scare will become unsustainable.

Figure T12. The Freedom Clock edges ever closer to 20 years without global warming.

A new record ‘Pause’ length: Satellite Data: No global warming for 18 years 8 months!

Special To Climate Depot The Pause lengthens yet again A new record Pause length: no warming for 18 years 8 months By Christopher Monckton of Brenchley One-third of Man’s entire influence on climate since the Industrial Revolution has occurred since January 1997. Yet for 224 months since then there has been no global warming at all (Fig. 1). With this month’s RSS (Remote Sensing Systems) temperature record, the Pause sets a new record at 18 years 8 months. Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean surface temperature anomaly dataset shows no global warming for 18 years 8 months since January 1997, though one-third of all anthropogenic forcings occurred during the period of the Pause. As ever, a warning about the current el Niño. It is becoming ever more likely that the temperature increase that usually accompanies an el Niño will begin to shorten the Pause somewhat, just in time for the Paris climate summit, though a subsequent La Niña would be likely to bring about a resumption and perhaps even a lengthening of the Pause. The spike in global temperatures caused by the thermohaline circulation carrying the warmer waters from the tropical Pacific all around the world usually occurs in the northern-hemisphere winter during an el Niño year. However, the year or two after an el Niño usually – but not always – brings an offsetting la Niña, cooling first the ocean surface and then the air temperature and restoring global temperature to normal. Figure 1a. The sea surface temperature index for the Nino 3.4 region of the tropical eastern Pacific, showing the climb towards a peak that generally occurs in the northern-hemisphere winter. For now, the Pause continues to lengthen, but before long the warmer sea surface temperatures in the Pacific will be carried around the world by the thermohaline circulation, causing a temporary warming spike in global temperatures. 
The hiatus period of 18 years 8 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate. The UAH dataset shows a Pause almost as long as the RSS dataset. However, the much-altered surface tamperature datasets show a small warming rate (Fig. 1b). Figure 1b. The least-squares linear-regression trend on the mean of the GISS, HadCRUT4 and NCDC terrestrial monthly global mean surface temperature anomaly datasets shows global warming at a rate equivalent to a little over 1 C° per century during the period of the Pause from January 1997 to July 2015. Bearing in mind that one-third of the 2.4 W m–2 radiative forcing from all manmade sources since 1750 has occurred during the period of the Pause, a warming rate equivalent to little more than 1 C°/century is not exactly alarming. However, the paper that reported the supposed absence of the Pause was extremely careful not to report just how little warming the terrestrial datasets – even after all their many tamperings – actually show. As always, a note of caution. Merely because there has been little or no warming in recent decades, one may not draw the conclusion that warming has ended forever. The trend lines measure what has occurred: they do not predict what will occur. The Pause – politically useful though it may be to all who wish that the “official” scientific community would remember its duty of skepticism – is far less important than the growing discrepancy between the predictions of the general-circulation models and observed reality. The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, continues to widen. 
If the Pause lengthens just a little more, the rate of warming in the quarter-century since the IPCC’s First Assessment Report in 1990 will fall below 1 C°/century equivalent.

Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 307 months January 1990 to July 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at just 1 K/century equivalent, taken as the mean of the RSS and UAH v. 5.6 satellite monthly mean lower-troposphere temperature anomalies.

Figure 3. Predicted temperature change, January 2005 to July 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v. 5.6 satellite lower-troposphere temperature anomalies.

The page Key Facts about Global Temperature (below) should be shown to anyone who persists in believing that, in the words of Mr Obama’s Twitteratus, “global warming is real, manmade and dangerous”. The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century.

Key facts about global temperature

The RSS satellite dataset shows no global warming at all for the 224 months from January 1997 to August 2015 – more than half the 440-month satellite record.

There has been no warming even though one-third of all anthropogenic forcings since 1750 have occurred since the Pause began in January 1997.

The entire RSS dataset from January 1979 to date shows global warming at an unalarming rate equivalent to just 1.2 Cº per century.
Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.

The global warming trend since 1900 is equivalent to 0.75 Cº per century. This is well within natural variability and may not have much to do with us.

The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century. Compare the warming on the Central England temperature dataset in the 40 years 1694-1733, well before the Industrial Revolution, equivalent to 4.33 C°/century.

In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.

The warming trend since 1990, when the IPCC wrote its first report, is equivalent to 1 Cº per century. The IPCC had predicted close to thrice as much.

To meet the IPCC’s central prediction of 1 C° of warming from 1990-2025, a warming of 0.75 C°, equivalent to 7.5 C°/century, would have to occur in the next decade.

Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business-as-usual centennial prediction of 4.8 Cº of warming to 2100.

The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.

The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.

The oceans, according to the 3600+ ARGO buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century, or 1 C° in 430 years.

Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.
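The rounded rate figures in the list above (0.02 Cº/decade, 0.23 Cº/century, "1 C° in 430 years") all follow from a single underlying trend of roughly 0.0023 Cº per year; the 0.0023 figure is my inference from reconciling the quoted roundings, not a number the article states. A quick arithmetic check:

```python
# Underlying rate implied by the list's rounded figures (my inferred value).
rate_per_year = 0.0023                  # Cº/year, ARGO ocean warming trend

per_decade = rate_per_year * 10         # ~0.023, reported rounded down to 0.02 Cº/decade
per_century = rate_per_year * 100       # 0.23 Cº/century, as quoted
years_per_degree = 1.0 / rate_per_year  # ~435 years, quoted as "1 C° in 430 years"

# The IPCC shortfall check: 0.75 C° over the coming decade is the
# same rate as 7.5 C°/century, as the list says.
decade_warming = 0.75
century_equivalent = decade_warming * 10
```

The three ocean figures are therefore the same number at different roundings, not three independent estimates.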
Technical note

Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend. The fact of a long Pause is an indication of the widening discrepancy between prediction and reality in the temperature record.

The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great El Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than the rest to capture such fluctuations without artificially filtering them out.

Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking, via spaceward mirrors, the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that NASA’s Wilkinson Microwave Anisotropy Probe determined the age of the Universe: 13.82 billion years.

The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website.
A computer algorithm reads them down from the text file and plots them automatically, using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity. The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.

The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend. Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.

RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures. Dr Mears’ results are summarized in Fig. T1:

Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of El Chichón (1983) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998.
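The technical note's point about the correlation coefficient (highly variable data about a flat trend give a near-zero r, while even modest trends give an r near 1) can be illustrated numerically. A minimal sketch in Python with synthetic monthly anomalies; the series, noise levels and random seed are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(240) / 12.0                  # 20 years, in years

# Variable data with no underlying trend vs. a modest trend with small noise.
flat = rng.normal(0.0, 0.1, months.size)
trending = 0.05 * months + rng.normal(0.0, 0.02, months.size)

r_flat = np.corrcoef(months, flat)[0, 1]        # near zero
r_trend = np.corrcoef(months, trending)[0, 1]   # near one
```

A low r in this setting says only that the fitted slope explains little of the month-to-month variance, which is exactly what one expects of a flat trend through noisy data; it is not by itself evidence for or against the trend estimate.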
Dr Mears writes: “The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.” Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph: “Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.” In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. The headline graph in these monthly reports begins in 1997 because that is as far back as one can go in the data and still obtain a zero trend. Curiously, Dr Mears prefers the terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite data to calibrate its own terrestrial record. 
The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. Sources of the IPCC projections in Figs. 2 and 3 IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded: “Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.” That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted. In 1990, the IPCC said this: “Based on current models we predict: “under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii). 
Later, the IPCC said: “The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv). The orange region in Fig. 2 represents the IPCC’s medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025. The IPCC’s predicted global warming over the 25 years from 1990 to the present differs little from a straight line (Fig. T2). Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii). Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely. But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3). Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990). Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date. 
True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless. The overall picture is clear: Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn.

Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality.

To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.27 Cº, equivalent to little more than 1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, predicted for Scenario A in IPCC (1990) with “substantial confidence”, was approaching three times too big. In fact, the outturn is visibly well below even the least estimate.

In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration.

Is the ocean warming?
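The "approaching three times too big" claim can be checked directly from the numbers quoted: 0.27 Cº observed against a 0.71 Cº central prediction over the 307 months since January 1990 (the span given in the Fig. 2 caption). A quick check:

```python
months = 307                         # January 1990 to July 2015, per Fig. 2's caption
years = months / 12.0                # ~25.6 years

observed = 0.27                      # Cº, mean RSS + UAH trend over the period
predicted = 0.71                     # Cº, IPCC (1990) Scenario-A central estimate

obs_rate = observed / years * 100    # ~1.06 Cº/century, "little more than 1"
pred_rate = predicted / years * 100  # ~2.8 Cº/century, as quoted
ratio = predicted / observed         # ~2.6, "approaching three times"
```

So the quoted per-century equivalents and the "approaching three times" ratio are mutually consistent with the two underlying anomaly figures.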
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since, globally, the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.

Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000 km² box more than 316 km square and 2 km deep. Plainly, the results on the basis of a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.

Unfortunately ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº decade–1, or 0.2 Cº century–1.

Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).

Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which make the change seem a whole lot larger. The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming.
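The per-buoy sampling volume quoted above is easy to verify. A sketch in Python; the global ocean surface area of roughly 3.6 × 10⁸ km² and the 2 km sampled depth are my assumed round numbers, not figures the article gives:

```python
import math

ocean_area_km2 = 3.6e8    # assumed global ocean surface area, km^2
sampled_depth_km = 2.0    # ARGO profiles reach roughly 2 km down
n_buoys = 3600

# Ocean volume in the sampled layer, shared equally among the buoys.
volume_per_buoy = ocean_area_km2 * sampled_depth_km / n_buoys   # 200,000 km^3

# Side of the square box of that volume at 2 km depth.
box_side_km = math.sqrt(volume_per_buoy / sampled_depth_km)     # ~316 km
```

The 200,000 km³ and "more than 316 km square" figures thus both fall straight out of the assumed ocean area; 316 km is simply the square root of 100,000 km².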
All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.

Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured.

NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is. Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and those of the ARGO system. Over the period of ARGO data, from 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, or rather more than double the rate shown by ARGO.

ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution. What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming.
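The zettajoule figure can be converted back to a temperature change with nothing more than the heat-capacity relation ΔT = Q / (m·c). A back-of-envelope sketch; the ocean area, the 2000 m layer depth, and the seawater density and specific heat are my assumed round values, not numbers from the article:

```python
ocean_area_m2 = 3.6e14     # assumed global ocean surface area, m^2
layer_depth_m = 2000.0     # 0-2000 m stratum covered by the heat-content series
density = 1025.0           # kg/m^3, typical seawater
c_p = 3990.0               # J/(kg K), typical seawater specific heat

mass_kg = ocean_area_m2 * layer_depth_m * density   # ~7.4e20 kg of seawater

heat_j = 260e21                                     # 260 ZJ over 1970-2014, from the text
delta_t = heat_j / (mass_kg * c_p)                  # ΔT = Q / (m c), ~0.09 K over 44 years
rate_per_century = delta_t / 44.0 * 100.0           # ~0.2 K/century
```

Under these round-number assumptions the 260 ZJ works out to roughly a tenth of a degree over 44 years, i.e. on the order of the 0.2 K/century equivalent quoted in the text; the result scales directly with the assumed layer depth.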
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata, or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.

Figure T7. Near-global ocean temperatures by stratum, 0-1900 m, providing a visual reality check to show just how little the upper strata are affected by minor changes in global air surface temperature. Source: ARGO marine atlas.

Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant to capturing heat uptake by the deep ocean. Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere. If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.

Why were the models’ predictions exaggerated?

In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T8):

Figure T8. Predicted manmade radiative forcings (IPCC, 1990).
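A radiative forcing translates into equilibrium warming through the standard linear relation ΔT = S × ΔF / F₂ₓ, where S is the sensitivity per CO2 doubling and F₂ₓ ≈ 3.7 W m⁻² is the conventional forcing from a doubling (a textbook value, not a figure from this article). A sketch applying it to the 4 W m⁻² Scenario-A forcing mentioned above, under two of the sensitivity values discussed in the article:

```python
F_2X = 3.7   # W/m^2 per CO2 doubling (conventional value; my assumption)

def equilibrium_warming(forcing_wm2, sensitivity_per_doubling):
    """Linear scaling of the doubling sensitivity: dT = S * dF / F_2x."""
    return sensitivity_per_doubling * forcing_wm2 / F_2X

forcing = 4.0   # W/m^2, the IPCC (1990) Scenario-A industrial-era forcing (Fig. T8)

ipcc_best = equilibrium_warming(forcing, 3.0)   # with the IPCC's 3 Cº best-estimate sensitivity
low_sens = equilibrium_warming(forcing, 1.0)    # with the ~1 Cº sensitivity argued for below
```

The point of the sketch is only that the implied warming scales one-for-one with both the assumed forcing and the assumed sensitivity, which is why the article's argument turns on cutting both numbers.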
However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990 (Fig. T9):

Figure T9. Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).

Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T10):

Figure T10. Energy budget diagram for the Earth from Stephens et al. (2012).

In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.

It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC. Professor Ray Bates gave a paper in Moscow in summer 2015 in which he concluded, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T11), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both by Lindzen & Choi (2009, 2011) (Fig.
T11) and by Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.

Figure T11. Reality (center) vs. 11 models. From Lindzen & Choi (2009).

A growing body of reviewed papers find climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences, and is still the IPCC’s best estimate today. On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.

Finally, how long will it be before the Freedom Clock (Fig. T12) reaches 20 years without any global warming? If it does, the climate scare will become unsustainable.

Figure T12. The Freedom Clock edges ever closer to 20 years without global warming.

Related Links:

It’s Official – There are now 66 excuses for the temperature ‘pause’ – Updated list of 66 excuses for the 18-26 year ‘pause’ in global warming

Update: Scientists Challenge New Study Attempting to Erase The ‘Pause’: Warmists Rewrite Temperature History To Eliminate the ‘Pause’

‘Deceptive hottest year temperature record claims’ – ‘Inconvenient fact that these records are being set by less than the uncertainty in the statistics’

A new record ‘Pause’ length: No global warming for 18 years 7 months – Temperature standstill extends to 223 months

Special to Climate Depot

The Pause draws blood: a new record Pause length of no warming for 18 years 7 months

By Christopher Monckton of Brenchley

For 223 months, since January 1997, there has been no global warming at all (Fig. 1). This month’s RSS temperature record shows the Pause setting a new record at 18 years 7 months. It is becoming ever more likely that the temperature increase that usually accompanies an el Niño will begin to shorten the Pause somewhat, just in time for the Paris climate summit, though a subsequent La Niña would be likely to bring about a resumption and perhaps even a lengthening of the Pause.

Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere temperature anomaly dataset shows no global warming for 18 years 7 months since January 1997.

The hiatus period of 18 years 7 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate.

The Pause has now drawn blood. In the run-up to the world-government “climate” conference in Paris this December, the failure of the world to warm at all for well over half the satellite record has provoked the climate extremists to resort to desperate measures to try to do away with the Pause.

First there was Tom Karl with his paper attempting to wipe out the Pause by arbitrarily adding a hefty increase to all the ocean temperature measurements made by the 3600 automated ARGO bathythermograph buoys circulating in the oceans. Hey presto! All three of the longest-standing terrestrial temperature datasets – GISS, HadCRUT4 and NCDC – were duly adjusted, yet again, to show more global warming than has really occurred. However, the measured and recorded facts are these.
In the 11 full years April 2004 to March 2015, for which the ARGO system has been providing reasonably-calibrated though inevitably ill-resolved data (each buoy has to represent 200,000 km³ of ocean temperature with only three readings a month), there has been no warming at all in the upper 750 m, and only a little below that, so that the trend over the period of operation shows a warming equivalent to just 1 C° every 430 years.

Figure 1a. Near-global ocean temperatures by stratum, 0-1900 m. Source: ARGO marine atlas.

And in the lower troposphere, the warming according to RSS occurred at a rate equivalent to 1 C° every 700 years.

Figure 1b. The least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere temperature anomaly dataset shows warming at a rate equivalent to just 1 C° every 700 years.

Then along came another paper, this time saying that the GISS global temperature record shows global warming during the Pause and that, therefore, GISS shows global warming during the Pause. This instance of argumentum ad petitionem principii, the fallacy of circular argument, passed peer review without difficulty because it came to the politically-correct conclusion that there was no Pause. The paper reached its conclusion, however, without mentioning the word “satellite”. The UAH data show no warming for 18 years 5 months.

Figure 1c. The least-squares linear-regression trend on the UAH satellite monthly global mean lower-troposphere temperature anomaly dataset shows no global warming for 18 years 5 months since March 1997.

For completeness, though no reliance can now be placed on the terrestrial datasets, here is the “warming” rate they show since January 1997:

Figure 1d.
The least-squares linear-regression trend on the mean of the GISS, HadCRUT4 and NCDC terrestrial monthly global mean surface temperature anomaly datasets shows global warming at a rate equivalent to a little over 1 C° per century during the period of the Pause from January 1997 to July 2015.

Bearing in mind that one-third of the 2.4 W m–2 radiative forcing from all manmade sources since 1750 has occurred during the period of the Pause, a warming rate equivalent to little more than 1 C°/century is not exactly alarming. However, the paper that reported the supposed absence of the Pause was extremely careful not to report just how little warming the terrestrial datasets – even after all their many tamperings – actually show.

As always, a note of caution. Merely because there has been little or no warming in recent decades, one may not draw the conclusion that warming has ended forever. The trend lines measure what has occurred: they do not predict what will occur. Furthermore, the long, slow build-up of the current el Niño, which has now become strongish and – on past form – will not peak till the turn of the year, is already affecting tropical temperatures and, as the thermohaline circulation does its thing, must eventually affect global temperatures. Though one may expect the el Niño to be followed by a La Niña, canceling the temporary warming, this does not always happen. In short, the Pause may well come to an end and then disappear.

However, as this regular column has stressed before, the Pause – politically useful though it may be to all who wish that the “official” scientific community would remember its duty of skepticism – is far less important than the growing divergence between the predictions of the general-circulation models and observed reality. The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, continues to widen.
If the Pause lengthens just a little more, the rate of warming in the quarter-century since the IPCC’s First Assessment Report in 1990 will fall below 1 C°/century equivalent. Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 307 months January 1990 to July 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at just 1 K/century equivalent, taken as the mean of the RSS and UAH v. 5.6 satellite monthly mean lower-troposphere temperature anomalies. Figure 3. Predicted temperature change, January 2005 to July 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v. 5.6 satellite lower-troposphere temperature anomalies. The page Key Facts about Global Temperature (below) should be shown to anyone who persists in believing that, in the words of Mr Obama’s Twitteratus, “global warming is real, manmade and dangerous”. The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that that according to the ARGO bathythermograph data the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. Key facts about global temperature The RSS satellite dataset shows no global warming at all for 223 months from January 1997 to July 2015 – more than half the 439-month satellite record. There has been no warming even though one-third of all anthropogenic forcings since 1750 have occurred since January 1997, during the pause in global warming. The entire RSS dataset from January 1979 to date shows global warming at an unalarming rate equivalent to just 1.2 Cº per century. 
Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century. The global warming trend since 1900 is equivalent to 0.75 Cº per century. This is well within natural variability and may not have much to do with us. The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century. Compare the warming on the Central England temperature dataset in the 40 years 1694-1733, well before the Industrial Revolution, equivalent to 4.33 C°/century. In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century. The warming trend since 1990, when the IPCC wrote its first report, is equivalent to 1 Cº per century. The IPCC had predicted more than two and a half times as much. To meet the IPCC’s central prediction of 1 C° warming from 1990-2025, in the next decade a warming of 0.75 C°, equivalent to 7.5 C°/century, would have to occur. Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100. The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950. The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950. The oceans, according to the 3600+ ARGO buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century, or 1 C° in 430 years. Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.  
Technical note
Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
The fact of a long Pause is an indication of the widening discrepancy between prediction and reality in the temperature record. The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great el Niño more clearly than all other datasets. The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than the rest to capture such fluctuations without artificially filtering them out.
Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.
The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website.
A computer algorithm reads them down from the text file and plots them automatically, using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity. The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.
The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend.
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures. Dr Mears’ results are summarized in Fig. T1:
Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of El Chichón (1982) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998.
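The procedure described in this technical note – fit a least-squares trend, then search for the earliest start month from which the trailing trend is still zero or negative – can be sketched in a few lines. The anomaly series below is synthetic and purely illustrative; it is not the RSS data:

```python
def lstsq_slope(y):
    """Least-squares linear-regression slope of y against its index (per step)."""
    n = len(y)
    x_bar = (n - 1) / 2.0
    y_bar = sum(y) / n
    num = sum((i - x_bar) * (v - y_bar) for i, v in enumerate(y))
    den = sum((i - x_bar) ** 2 for i in range(n))
    return num / den

def pause_length(anoms, min_months=24):
    """Longest trailing window (in months) whose least-squares trend is <= 0."""
    for start in range(len(anoms) - min_months):
        if lstsq_slope(anoms[start:]) <= 0.0:
            return len(anoms) - start
    return 0

# Illustrative series: 20 years warming at 0.15 Cº/decade, then 18 years
# drifting very slightly downward
warm = [0.00125 * m for m in range(240)]
flat = [warm[-1] - 0.0001 * m for m in range(216)]
print(pause_length(warm + flat))  # at least the 216 flat months
```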
Dr Mears writes: “The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.” Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph: “Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.” In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. Curiously, Dr Mears prefers the terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite data to calibrate its own terrestrial record. The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. 
The el Niño may well strengthen throughout this year, reducing the length of the Great Pause. However, the discrepancy between prediction and observation continues to widen.
Sources of the IPCC projections in Figs. 2 and 3
IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded: “Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”
That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted.
In 1990, the IPCC said this: “Based on current models we predict: “under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii).
Later, the IPCC said: “The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030.
For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv). The orange region in Fig. 2 represents the IPCC’s medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025. The IPCC’s predicted global warming over the 25 years from 1990 to the present differs little from a straight line (Fig. T2).
Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).
Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.
But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).
Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).
Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date. True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted.
Here, too, all of the predictions were extravagantly baseless. The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn.
Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality.
To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean surface temperature anomalies – is 0.27 Cº, equivalent to little more than 1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, that was predicted for Scenario A in IPCC (1990) with “substantial confidence” was approaching three times too big. In fact, the outturn is visibly well below even the least estimate.
In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration.
Is the ocean warming?
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.
Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000 km² box more than 316 km square and 2 km deep. Plainly, results on the basis of a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.
Unfortunately ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº per decade, or 0.2 Cº per century.
Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).
Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which make the change seem a whole lot larger. The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming.
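The per-buoy sampling volume quoted above is straightforward to check. The ocean surface area below is a standard round number assumed here, not a figure from ARGO documentation; the script just does the division:

```python
# Back-of-the-envelope check of the ARGO sampling density quoted in the text.
ocean_area_km2 = 3.6e8     # ocean surface area, a round-number assumption
sampled_depth_km = 2.0     # the ARGO-profiled layer
n_buoys = 3600

vol_per_buoy_km3 = ocean_area_km2 * sampled_depth_km / n_buoys
side_km = (vol_per_buoy_km3 / sampled_depth_km) ** 0.5  # square patch per buoy

print(vol_per_buoy_km3)  # 200000.0 km³ per buoy, as the text says
print(side_km)           # ≈316 km on a side
```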
All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.
Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured.
NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.
Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº per decade, equivalent to 0.5 Cº per century, or rather more than double the rate shown by ARGO. ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution.
What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming.
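The conversion NOAA performs can be run in reverse: a heat content change ΔQ spread over a water mass m of specific heat c gives ΔT = ΔQ/(mc). The ocean area, density, and specific heat below are round-number assumptions of my own, not NOAA’s values, so this is only a consistency sketch:

```python
# Convert the quoted 260 ZJ (1970-2014) back into a mean temperature change
# of the upper 2 km of ocean. Physical constants are round-number assumptions.
delta_q = 260e21          # J (260 zettajoules)
area = 3.6e14             # m², ocean surface (assumed)
depth = 2000.0            # m, the ARGO-sampled layer
rho = 1025.0              # kg/m³, seawater density (assumed)
c = 3990.0                # J/(kg·K), seawater specific heat (assumed)

mass = rho * area * depth             # ≈ 7.4e20 kg of seawater
delta_t = delta_q / (mass * c)        # ≈ 0.09 K over the 44 years
per_century = delta_t / 44.0 * 100.0  # ≈ 0.2 K/century, as the text says
print(delta_t, per_century)
```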
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth. Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean.
Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.
If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.
Why were the models’ predictions exaggerated?
In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T7):
Figure T7. Predicted manmade radiative forcings (IPCC, 1990).
However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations.
As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990:
Figure T8: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).
Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T9):
Figure T9. Energy budget diagram for the Earth from Stephens et al. (2012)
In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.
It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC.
Professor Ray Bates gave a paper in Moscow in summer 2015 in which he concluded, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T10), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both by Lindzen & Choi (2009, 2011) and by Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.
Figure T10. Reality (center) vs. 11 models. From Lindzen & Choi (2009).
A growing body of reviewed papers find climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S.
National Academy of Sciences, and is still the IPCC’s best estimate today. On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.
Finally, how long will it be before the Freedom Clock (Fig. T11) reaches 20 years without any global warming? If it does, the climate scare will become unsustainable.
Figure T11. The Freedom Clock
Climate Depot Note: Above Graphs courtesy of WattsUpWithThat.com (Also see: It’s Official – There are now 66 excuses for Temp ‘pause’ – Updated list of 66 excuses for the 18-26 year ‘pause’ in global warming) Update: Scientists Challenge New Study Attempting to Erase The ‘Pause’: Warmists Rewrite Temperature History To Eliminate the ‘Pause’

Global Warming: ‘The Theory that Predicts Nothing and Explains Everything’

http://thefederalist.com/2015/06/08/global-warming-the-theory-that-predicts-nothing-and-explains-everything/
they’re signaling for everyone else to get on board. One question raised by the research is whether other global temperature datasets will see similar adjustments. One, kept by the Hadley Center of the UK Met Office, appears to support the global warming “hiatus” narrative—but then, so did NOAA’s dataset up until now. “Before this update, we were the slowest rate of warming,” said Karl. “And with the update now, we’re the leaders of the pack. So as other people make updates, they may end up adjusting upwards as well.”
This is going to be the new party line. “Hiatus”? What hiatus? Who are you going to believe, our adjustments or your lying thermometers? I realize the warmists are desperate, but they might not have thought through the overall effect of this new “adjustment” push. We’ve been told to take very, very seriously the objective data showing global warming is real and is happening—and then they announce that the data has been totally changed post hoc. This is meant to shore up the theory, but it actually calls the data into question.
As they say, hindsight is 20/20. It’s a lot easier to tweak your theory to make it a better fit to the data, or in this case, to tweak the way the data is measured and analyzed in order to make it better fit your theory. And then you proclaim how amazing it is that your theory “explains” the data.

Former Harvard Physicist Dr. Lubos Motl: Karl et al. hiatus killer is ‘research’ that began with conclusions

Karl et al. hiatus killer is “research” that began with conclusions http://feedproxy.google.com/~r/LuboMotlsReferenceFrame/~3/ocN5NaBfgoE/karl-et-al-hiatus-killer-is-research.html
All the major teams that try to quantify the “global mean temperature” indicate that the warming/cooling trend has been basically zero for almost two decades. RSS AMSU, a satellite record, shows a trend that is exactly zero (infinitesimally negative, I guess) in the recent 18.50 years. When you allow small trends that are clearly not statistically significant, you will conclude that even longer recent periods lack any sign of “global warming”.
This absence of “global warming” in the recent decades is usually referred to as the “hiatus”. I am using the word myself – and so do most skeptics and alarmists (except for those alarmists who think that the very word is a blasphemy, of course) – but if you asked me, I would probably reply that it is a silly name because the word “hiatus” indicates that it’s a temporary break in some process that otherwise takes place permanently.
I don’t really believe that there is any sufficiently reliable, strong, persistent process that could be called “global warming”. So the absence of a temperature trend in a 20-year window is as normal an outcome as you can get. Sometimes, the temperatures get higher in 20 years. Sometimes they get lower. An underlying “warming trend” from some source (e.g. CO2) can make the former possibility slightly more likely than the latter one. Sometimes the temperatures stay nearly constant. There’s no rational need to invent catchy names for these three possibilities, especially not for the most mundane third possibility.
A whole discipline of pseudoscience – one pretending to be science, like most pseudosciences – has been created. It is the “climate change science” whose preachers – pretending to be scientists – shout that the sky is falling.
The “hiatus” is an inconvenient truth for these “researchers”, so as of mid May 2015, they have proposed 63 explanations of the hiatus. The heat is hiding in the ocean and will marvelously jump out, debunk the second law of thermodynamics, and fry the world during Easter 2016. The “hiatus” is due to a Chinese chimney. It was eaten by a dog. And so on, and so on. Finally, as I saw on all the skeptics’ blogs – but Bill Z. made me much more interested – we have the 64th explanation of the “hiatus”. The “hiatus” no longer exists and the global warming has returned to the recent 20 years as well.
No group of 3 alarmists will agree which of the 64 explanations is right but you may be sure that they are collectively right, anyway. It’s how the consensus science works. They know the “right” conclusions even if they don’t have the slightest idea what is going on and what the right explanations of anything could be.
The disappearance of the “hiatus”, the 64th explanation of it, is the claim of a Science Magazine article, “Possible artifacts of data biases in the recent global surface warming hiatus”, by Thomas Karl and 8 coauthors from NOAA. They have “revised” their NCDC dataset (they have a new name to replace NCDC but I don’t think that I should write down or you should memorize the new name because it’s counterproductive to be distracted by irrelevant and mutating names of each piece of squashy junk that you may find in a cesspool) and the new NCDC record is the first one that “shows” global warming in the recent two decades, too.
You may compare an alarmist blog post with a skeptic blog post on this issue. The canonical alarmist blog post is Gavin Schmidt’s “NOAA temperature record updates and the ‘hiatus’” (Real Climate) – and I am being extremely generous when I include the debatable letter “m” in Schmidt’s name – while the best skeptical response (or an accumulation of responses) is one by Judith Curry: “Has NOAA ‘busted’ the pause in global warming?”
(Climate Etc.) Thanks, Bill, for this URL!
First, the “global warming” trend they get (for the global mean temperature in the 1998-2014 window) is insignificant by even soft scientists’ standards – less than 2-sigma (90% confidence level). And this figure, 90%, is really a cherry-picked maximum from several conceivable, similar calculations. I may discuss these matters later.
But even the “microscopic tricks” that led to these adjustments make it extremely likely that the adjusted dataset is less accurate than the unadjusted one. The biggest change occurred in the oceans.
Now, one must understand that largely due to the large heat capacity of water, the temperature variations above the ocean are smaller than those above the land. This comment applies to periodic oscillations as well as various recently observed “trends”. This asymmetry between the land masses and the oceans has numerous consequences. For example, the seasonal variations of the global mean temperature emulate those of the Northern Hemisphere – because most of the variations come from the land masses and those lie mostly on the Northern Hemisphere.
You may talk about many trends – those in the ocean and above the land; and you may measure them in various ways: satellites, weather stations, weather balloons, buoys, or engine intake of marine vessels ;-). You were expected to laugh when I mentioned the last source of the data – engine intake of marine vessels. It sounds great if one can determine some historical information about the temperatures from this unexpected source. But it wasn’t a gadget that was designed to measure temperatures – unlike satellites, buoys, weather balloons, and weather stations. And unsurprisingly, such “amateur gadgets to monitor temperatures” have some problems that seem to be generally acknowledged. Dick Lindzen et al. cite heat conduction from the vessels. However, the shock is that the warming trend extracted from the marine vessels was copied to the buoys time series.
It means that an increasing linear function was simply added to the buoys’ time series for the temperature – to make them more “well-behaved”. Surprisingly for them ;-), once they added an increasing function to the temperature as a function of time, the slope (warming trend) extracted via the linear regression has increased! This surprising mathematical result must be the holy 884th sign of global warming that everyone was looking for!
You see what’s going on. The warming trend indicated by the buoys – a project that was specifically designed by scientists to measure the temperature of the ocean – was completely erased by Karl et al. The time series was detrended and the trend was replaced by another trend extracted from a different source, one that wasn’t meant to measure temperatures scientifically. Great.
They say that every year, the global warming has to be getting worse and they have so much evidence. But they must be terribly unlucky because in the only two decades in which we had numerous diverse projects specifically designed by scientists to measure temperatures, all of the professional ones seem to say that there is a “hiatus” throughout this 20-year period. So the faith finally has to depend on upside down medieval trees and engines in outdated marine vessels.
Let me return to their adjustment where the marine vessels knew about the “truth”. If you’re not dull, you will ask: Why didn’t they do just the opposite? They could have repaired the trend from the marine vessels for them to agree with the lower trend from the buoys. You won’t find any answer to this question except for the reminder that the marine vessels have existed for a longer time. That’s great but their long-lived existence in no way implies that their measurement of the trend since 1998 is more accurate than the trend measured from the buoys.
I will be happy to listen to another explanation that I overlooked but the conclusion seems totally obvious to me.
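The mathematical point here is easy to verify: adding an increasing linear function to a series raises its least-squares slope by exactly the slope of what was added, because regression is linear in the data. A sketch with a made-up “buoy” series (not the real data):

```python
def slope(y):
    """Least-squares linear-regression slope of y against its index."""
    n = len(y)
    x_bar = (n - 1) / 2.0
    y_bar = sum(y) / n
    num = sum((i - x_bar) * (v - y_bar) for i, v in enumerate(y))
    return num / sum((i - x_bar) ** 2 for i in range(n))

# Made-up, roughly trendless "buoy" anomalies
buoys = [0.10, -0.05, 0.12, 0.00, -0.08, 0.10, 0.02, -0.03, 0.06, -0.04]
ship_trend = 0.01  # per time step: the trend "copied" from another source
adjusted = [v + ship_trend * i for i, v in enumerate(buoys)]

# Because regression is linear, the fitted slope rises by exactly ship_trend
print(slope(buoys), slope(adjusted))
```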
The trend was “copied” in this way in order to spread the highest trends that randomly appeared in a dataset to all other datasets. It was done simply because the authors prefer higher trends. Higher trends mean higher grants.
Imagine that you have 10 sources of temperature measurements that have produced their estimates for the “global warming” trend (in °C per century). Let us assume that the numbers are randomly and Gaussian-normally distributed around the right value, which I call 1, with the standard deviation, also 1 (by the way, a pretty reasonable distribution for the warming trend). You will get numbers such as
{2.17, 0.34, -1.53, 0.95, 1.56, -0.18, -0.95, 0.29, 0.38, 1.66}
I produced this output with the Mathematica command
Round[RandomVariate[NormalDistribution[1, 1], 10], 0.01]
Well, the correct value of the trend is not accurately known if you only get the 10 random numbers. If you take the average of the 10 values, you would get 0.469. It’s not quite equal to the correct value 1 (by construction), either, but it’s closer. The standard deviation of the arithmetic average is smaller by a factor of the square root of ten than the standard deviation of a single measurement (which was one).
What Karl et al. and most other preachers who just decide to write a paper killing an inconvenient fact do is simply to take the greatest value of the trend they may get in any other way and declare it as the truth. In my example, it’s the number 2.17. They adjust all the data to match this highest trend. So the list of 10 trends above is replaced by
{2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17}
Nice. The warming trend seems clear now. It’s 4.6 times higher than the one that we would have gotten from the average value of the noisy numbers and 2.17 times greater than the correct one. The maximum among 10 random numbers from a distribution is clearly “very likely” to be higher than the actual mean value.
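The Mathematica experiment translates directly into other languages. Here is the same calculation in Python, reusing the ten draws quoted above rather than drawing fresh ones, so the numbers match the text:

```python
# The ten N(1, 1) draws quoted above; the true trend is 1.0 by construction
trends = [2.17, 0.34, -1.53, 0.95, 1.56, -0.18, -0.95, 0.29, 0.38, 1.66]

mean_estimate = sum(trends) / len(trends)  # the honest estimate: 0.469
cherry_pick = max(trends)                  # "adjust everything upward": 2.17

print(round(mean_estimate, 3))                # 0.469
print(round(cherry_pick / mean_estimate, 2))  # ~4.63x the averaged estimate
```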
To choose the maximum means to cherry-pick, to be biased. And Dick Lindzen and his CATO colleagues suggest that this step has taken place repeatedly, even in this single paper. They always needed a “better” reconstruction of the temperature in the Arctic Ocean. So they just copied the trends from the Arctic lands.
This is likely to produce a bogus warming signal simply because much of the Arctic ocean is covered by ice throughout the year. And when water and ice co-exist, well, the temperature has to be extremely close to 0 °C, the only temperature at which both phases may exist, and therefore the trend at these constantly frozen places is likely to be close to zero. (I hope that the U.S. readers agree that this is a rather easy-to-remember number to describe the freezing point, even if the unknown Democratic Party candidate called Chaffee or something like that who wants to introduce the metric system in the U.S. will lose, as everyone expects. He fell in love with it in Canada.) Quite generally, all the trends and fluctuations associated with the land are greater than those around the ocean and this is bound to be true in the Arctic as well, probably even away from ice.
It is not possible to be “quite certain” that the desire to push the data in the “preferred” direction was the motivation behind a particular adjustment. However, when you see too many of them and a shockingly high percentage of them always works to increase the trend, well, you may become rather certain that something dishonest is going on. Your confidence may actually be calculated. If there are 20 adjustments whose signs should be independent but you get 20 times plus, the probability that this occurs by chance is 1/2^20 or one part in a million.
With twenty “plus signs” and zero “minus signs” of the adjustments that increase the alarmists’ certainty that there exists an important persistent global warming trend, you may become 99.9999% certain of their misconduct.
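The confidence calculation in the preceding paragraph is a plain binomial sign test: if each adjustment were equally likely to go either way, twenty pluses in a row would have probability 2⁻²⁰:

```python
# Probability that 20 independent, direction-neutral adjustments all happen
# to land on the trend-increasing side purely by chance
n_adjustments = 20
p_all_plus = 0.5 ** n_adjustments

print(p_all_plus)  # 1/1,048,576 ≈ 9.5e-7 – "one part in a million"
```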
It’s great if those folks can achieve 90% “certainty” that their ideology about the climate threats is right. Meanwhile, you may achieve a 99.9999%, or 5-sigma, certainty that they are crooks.

Judith Curry’s blog contains lots of other (partially overlapping) observations about the paper by Karl et al. made by various people. It is staggering to compare that serious scientific analysis with the blog post by Gavin Schmidt, which addresses virtually none of these technical issues. But one sentence by Schmidt seems to be his “main point”:

“The real conclusion is that this criteria for a ‘hiatus’ is simply not a robust measure of anything.”

Funny. The fundamental dogmas of the warming church force you to say that a 90% confidence level is “near certainty” while no inconvenient truth can ever be “a robust measure of anything”. That is nice, but the hiatus is still there, and it statistically significantly contradicts the predictions of the dangerous-global-warming theory. That theory claims that 20 years are enough for a statistically significant trend to emerge; that is why every decade of the “fight against climate change” is said to matter by the anti-carbon jihadists! So if your theory predicts a significant trend in 20-year windows but the data show no such trend, your theory is falsified, Mr Schmidt, with or without the fog. — gReader Pro

Global warming standstill/pause increases to ‘a new record length’: 18 years 6 months

Special to Climate Depot El Niño strengthens: the Pause lengthens Global temperature update: no warming for 18 years 6 months By Christopher Monckton of Brenchley For 222 months, since December 1996, there has been no global warming at all (Fig. 1). This month’s RSS temperature – still unaffected by a slowly strengthening el Niño, which will eventually cause temporary warming – passes another six-month milestone, and establishes a new record length for the Pause: 18 years 6 months. What is more, the IPCC’s centrally-predicted warming rate since its First Assessment Report in 1990 is now more than two and a half times the measured rate. On any view, the predictions on which the entire climate scare was based were extreme exaggerations. However, it is becoming ever more likely that the temperature increase that usually accompanies an el Niño may come through after a lag of four or five months. The Pause may yet shorten somewhat, just in time for the Paris climate summit, though a subsequent La Niña would be likely to bring about a resumption of the Pause. Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean surface temperature anomaly dataset shows no global warming for 18 years 6 months since December 1996. The hiatus period of 18 years 6 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. Note that the start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate. The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, continues to widen. For the time being, these two graphs will be based on RSS alone, since the text file for the new UAH v.6 dataset is not yet being updated monthly. 
However, the effect of the recent UAH adjustments – exceptional in that they are the only such adjustments I can recall that reduce the previous trend rather than steepening it – is to bring the UAH dataset very close to that of RSS, so that there is now a clear distinction between the satellite and terrestrial datasets, particularly since the latter were subjected to adjustments over the past year or two that steepened the apparent rate of warming. Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 305 months January 1990 to May 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.1 K/century equivalent, taken as the mean of the RSS and UAH v. 5.6 satellite monthly mean lower-troposphere temperature anomalies. Figure 3. Predicted temperature change, January 2005 to May 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v. 5.6 satellite lower-troposphere temperature anomalies. The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. Key facts about global temperature The RSS satellite dataset shows no global warming at all for 222 months from December 1996 to May 2015 – more than half the 437-month satellite record. The entire RSS dataset from January 1979 to date shows global warming at an unalarming rate equivalent to just 1.2 Cº per century. Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us. The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century. In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century. The warming trend since 1990, when the IPCC wrote its first report is equivalent to 1.1 Cº per century. The IPCC had predicted two and a half times as much. Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100. The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950. The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950. The oceans, according to the 3600+ ARGO bathythermograph buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century. Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that. Technical note Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend. The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. 
The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able to capture such fluctuations without artificially filtering them out than other datasets. Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years. The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file and plots them automatically using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity. The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. 
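The Technical Note says the start date of the Pause “is calculated so as to find the longest period with a zero trend.” That procedure can be sketched in a few lines of Python: fit a least-squares trend from every candidate start month to the end of the series and report the earliest start from which the slope is still non-positive. This is a sketch on made-up monthly anomalies, not the real RSS data, and the function names are illustrative.

```python
def ols_slope(y):
    """Least-squares slope of y against month index 0..len(y)-1."""
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (yi - y_mean) for x, yi in enumerate(y))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def zero_trend_start(anomalies):
    """Earliest start index from which the trend to the end is still <= 0."""
    for start in range(len(anomalies) - 1):
        if ols_slope(anomalies[start:]) <= 0:
            return start
    return None  # every window trends upward

# Illustrative series: ten months of warming, then a slow noise-free drift down.
series = [0.1 * i for i in range(10)] + [0.9 - 0.01 * i for i in range(20)]
start = zero_trend_start(series)
print(f"zero-trend window starts at month index {start}, "
      f"covering the last {len(series) - start} months")
```

Applied to the actual RSS monthly file, the same loop would, if the article’s description is accurate, recover the reported December 1996 start month; here the toy series rises for ten months before drifting down, so the zero-trend window begins partway through the warming phase.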
The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend. Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat. RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures. Dr Mears’ results are summarized in Fig. T1: Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of Chichón (1983) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998. Dr Mears writes: “The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.” Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph: “Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  
Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.” In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. Curiously, Dr Mears prefers the much-altered terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite record to calibrate its own terrestrial record. The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. It remains possible that el Nino-like conditions may prevail this year, reducing the length of the Great Pause. However, the discrepancy between prediction and observation continues to widen. Sources of the IPCC projections in Figs. 2 and 3 IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded: “Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. 
… There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.” That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted. In 1990, the IPCC said this: “Based on current models we predict: “under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii). Later, the IPCC said: “The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv). The orange region in Fig. 2 represents the IPCC’s less extreme medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025, rather than its more extreme Scenario-A estimate, i.e. 1.8 [1.3, 3.7] K by 2030. It has been suggested that the IPCC did not predict the straight-line global warming rate that is shown in Figs. 2-3. In fact, however, its predicted global warming over so short a term as the 25 years from 1990 to the present differs little from a straight line (Fig. T2). Figure T2. 
Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii). Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely. Likewise, to reach 1.8 K by 2030, there would have to be four or five times as much warming in the next 15 years as there was in the last 25 years. That is still less likely. But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3). Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990). Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date. True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless. The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn. Figure T4. 
Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality. To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean surface temperature anomalies – is 0.27 Cº, equivalent to less than 1.1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, that was predicted for Scenario A in IPCC (1990) with “substantial confidence” was two and a half times too big. In fact, the outturn is visibly well below even the least estimate. In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 C/century equivalent. Now it is just 1.7 Cº equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration. Is the ocean warming? One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean and, since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date. 
Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000-square-kilometre box about 316 km square and 2 km deep. Plainly, the results on the basis of a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork. Unfortunately, ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is just 0.02 Cº decade–1, equivalent to 0.2 Cº century–1. Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow). Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which makes the change seem a whole lot larger. The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize. Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured.
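The sparsity figures quoted above follow from the ocean’s bulk dimensions, and the 100,000 figure works out in square kilometres (316 km × 316 km ≈ 100,000 km²), not square miles. A quick check in Python; the 3.6 × 10⁸ km² ocean surface area and the 2 km profiling depth are rounded assumptions, not values stated in the article.

```python
OCEAN_AREA_KM2 = 3.6e8  # global ocean surface area, rounded (assumption)
ARGO_DEPTH_KM = 2       # ARGO floats profile roughly the top 2 km (assumption)
N_BUOYS = 3600

sampled_volume_km3 = OCEAN_AREA_KM2 * ARGO_DEPTH_KM   # ~7.2e8 km3 of sampled ocean
per_buoy_km3 = sampled_volume_km3 / N_BUOYS           # volume each buoy must cover
per_buoy_area_km2 = per_buoy_km3 / ARGO_DEPTH_KM      # footprint of that volume
box_side_km = per_buoy_area_km2 ** 0.5                # side of the equivalent box

print(f"volume per buoy: {per_buoy_km3:,.0f} km3")    # 200,000 km3, as quoted
print(f"box per buoy: {box_side_km:.0f} km x {box_side_km:.0f} km "
      f"x {ARGO_DEPTH_KM} km deep")
```

With these rounded inputs the 200,000 km³ and 316 km figures in the passage both come out as stated.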
NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is. Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, or rather more than double the rate shown by ARGO. ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution. What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming. Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata, or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth. Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean.
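The zettajoule-to-kelvin back-conversion described above is simple arithmetic once a mass and heat capacity are fixed for the 0–2000 m layer. A sketch in Python; the area, density, and specific-heat values are rounded textbook figures assumed here, not values taken from the article.

```python
# Rounded physical constants (assumptions, not from the article):
OCEAN_AREA_M2 = 3.6e14   # global ocean surface area, m2
LAYER_DEPTH_M = 2000     # 0-2000 m layer covered by the NOAA/ARGO comparison
DENSITY = 1030           # seawater density, kg/m3
SPECIFIC_HEAT = 4000     # seawater specific heat, J/(kg*K)

heat_zj = 260            # quoted NOAA ocean heat content change, 1970-2014, in ZJ
period_years = 2014 - 1970

layer_mass_kg = OCEAN_AREA_M2 * LAYER_DEPTH_M * DENSITY
delta_t = heat_zj * 1e21 / (layer_mass_kg * SPECIFIC_HEAT)  # kelvin
rate_per_century = delta_t / period_years * 100

print(f"temperature change: {delta_t:.3f} K over {period_years} years")
print(f"equivalent rate: {rate_per_century:.2f} K/century")
```

With these rounded inputs, 260 ZJ over 44 years works out to roughly 0.09 K, or about 0.2 K/century, consistent with the figure quoted in the passage.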
Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere. If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has. Why were the models’ predictions exaggerated? In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T7): Figure T7. Predicted manmade radiative forcings (IPCC, 1990). However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990: Figure T8: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013). Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T9): Figure T9. Energy budget diagram for the Earth from Stephens et al.
(2012). In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission. It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC. Professor Ray Bates will shortly give a paper in Moscow in which he will conclude, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T10), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both by Lindzen & Choi and by Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling. Figure T10. Reality (center) vs. 11 models. From Lindzen & Choi (2009). A growing body of reviewed papers finds climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences and is still the IPCC’s best estimate today. On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.
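The forcing figures discussed above (4 W/m² predicted, 2.3 W/m² now estimated) can be put in context with the standard simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C₀) W/m² (Myhre et al. 1998). This formula and the 280/400 ppm concentrations are standard reference values, not taken from the article.

```python
from math import log

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m2."""
    return 5.35 * log(c_ppm / c0_ppm)

doubling = co2_forcing(2 * 280, 280)  # forcing per CO2 doubling, ~3.71 W/m2
since_1750 = co2_forcing(400, 280)    # pre-industrial 280 ppm to ~400 ppm

print(f"forcing per CO2 doubling: {doubling:.2f} W/m2")
print(f"CO2-only forcing since 1750: {since_1750:.2f} W/m2")
```

On these standard figures, CO2 alone contributes roughly 1.9 W/m² since 1750, with other greenhouse gases adding to, and aerosols subtracting from, the IPCC’s 2.3 W/m² net estimate quoted in the article.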

Global warming ‘pause’ expands to ‘new record length’: No warming for 18 years 5 months

Special to Climate Depot El Niño has not yet paused the Pause Global temperature update: no warming for 18 years 5 months By Christopher Monckton of Brenchley Since December 1996 there has been no global warming at all (Fig. 1). This month’s RSS temperature – still unaffected by the most persistent el Niño conditions of the current weak cycle – shows a new record length for the Pause: 18 years 5 months. The result, as always, comes with a warning that the temperature increase that usually accompanies an el Niño may come through after a lag of four or five months. If, on the other hand, la Niña conditions begin to cool the oceans in time, there could be a lengthening of the Pause just in time for the Paris world-government summit in December 2015. Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean surface temperature anomaly dataset shows no global warming for 18 years 5 months since December 1996. The hiatus period of 18 years 5 months, or 221 months, is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, also continues to widen. Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 303 months January 1990 to March 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.4 K/century equivalent, taken as the mean of the RSS and UAH v. 5.6 satellite monthly mean lower-troposphere temperature anomalies. Figure 3. 
Predicted temperature change, January 2005 to March 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v. 5.6 satellite lower-troposphere temperature anomalies. The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. There are also details of the long-awaited beta-test version 6.0 of the University of Alabama at Huntsville’s satellite lower-troposphere dataset, which now shows a pause very nearly as long as the RSS dataset. However, the data are not yet in a form compatible with the earlier version, so v. 6 will not be used here until the beta testing is complete. Key facts about global temperature The RSS satellite dataset shows no global warming at all for 221 months from December 1996 to April 2015 – more than half the 436-month satellite record. The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us. Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century. The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century. In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century. The global warming trend since 1990, when the IPCC wrote its first report, is equivalent to below 1.4 Cº per century – half of what the IPCC had then predicted.
Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100. The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950. The IPCC’s 4.8 Cº-by-2100 prediction is almost four times the observed real-world warming trend since we might in theory have begun influencing it in 1950. The oceans, according to the 3600+ ARGO bathythermograph buoys, are warming at a rate equivalent to just 0.02 Cº per decade, or 0.23 Cº per century. Recent extreme weather cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.  Technical note Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend. The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able to capture such fluctuations without artificially filtering them out than other datasets. Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. 
The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years. The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file, takes their mean and plots them automatically using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity. The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model generates results little different from a least-squares trend. Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat. 
RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures. Dr Mears’ results are summarized in Fig. T1: Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of Chichón (1983) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998. Dr Mears writes: “The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.” Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph: “Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  
… The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.” In fact, the spike in temperatures caused by the Great el Niño of 1998 is largely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. Curiously, Dr Mears prefers the much-altered terrestrial datasets to the satellite datasets. However, over the entire length of the RSS and UAH series since 1979, the trends on the mean of the terrestrial datasets and on the mean of the satellite datasets are near-identical. Indeed, the UK Met Office uses the satellite record to calibrate its own terrestrial record. The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. It remains possible that el Niño-like conditions may prevail this year, reducing the length of the Great Pause. However, the discrepancy between prediction and observation continues to widen.

Sources of the IPCC projections in Figs. 2 and 3

IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded: “Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. 
… There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.” That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted. In 1990, the IPCC said this: “Based on current models we predict: “under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii). Later, the IPCC said: “The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv). The orange region in Fig. 2 represents the IPCC’s less extreme medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025, rather than its more extreme Scenario-A estimate, i.e. 1.8 [1.3, 3.7] K by 2030. Some try to say the IPCC did not predict the straight-line global warming rate that is shown in Figs. 2-3. In fact, however, the IPCC’s predicted global warming over so short a term as the 25 years from 1990 to the present is little different from a straight line (Fig. T2). Figure T2. 
Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii). Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely. Likewise, to reach 1.8 K by 2030, there would have to be four or five times as much warming in the next 15 years as there was in the last 25 years. That is still less likely. But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3). Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990). Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date. True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted, and the predictions were extravagantly baseless. The overall picture is clear. 
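The “twice as much” and “four or five times” comparisons above are simple arithmetic; taking the roughly 0.35 K of warming observed over the 25 years since 1990 (the figure used later in this piece), they can be checked directly:

```python
# Checking the "twice as much" and "four or five times" comparisons,
# using the article's ~0.35 K of observed warming over 1990-2015.
observed_25yr = 0.35                     # K, stated outturn since 1990

to_1K_by_2025 = 1.0 - observed_25yr      # warming still needed in the next 10 years
print(round(to_1K_by_2025 / observed_25yr, 1))   # 1.9 -> roughly twice the last 25 years

to_1p8K_by_2030 = 1.8 - observed_25yr    # warming still needed in the next 15 years
print(round(to_1p8K_by_2030 / observed_25yr, 1))  # 4.1 -> "four or five times"
```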
Scenario A is the emissions scenario from 1990 that is closest to the observed emissions outturn, and yet there has been only a third of a degree of global warming since 1990 – about half of what the IPCC had then predicted with what it called “substantial confidence”. Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality. To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.35 Cº, equivalent to just 1.4 Cº/century, or a little below half of the central estimate of 0.70 Cº, equivalent to 2.8 Cº/century, that was predicted for Scenario A in IPCC (1990). The outturn is visibly well below even the least estimate. In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration.

Is the ocean warming? 
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean and, since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date. Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys somehow has to cover 200,000 cubic kilometres of ocean – a 100,000-square-kilometre box more than 316 km square and 2 km deep. Plainly, results on the basis of a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork. Fortunately, a long-standing bug in the ARGO data delivery system has now been fixed, so I am able to get the monthly global mean ocean temperature data – though ARGO seems not to have updated the dataset since December 2014. However, that gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº per decade, or 0.2 Cº per century. Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow). Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which makes the change seem a whole lot larger. 
The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize. Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in fractions of a Kelvin that were originally measured. NOAA’s conversion of the minuscule temperature change data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is. Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004 to 2014, the NOAA data imply that the oceans are warming at 0.05 Cº per decade, equivalent to 0.5 Cº per century, or rather more than double the rate shown by ARGO. ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution. What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming. 
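Both the per-buoy coverage figure and the zettajoule-to-kelvin back-conversion discussed above can be reproduced with round numbers. The ocean area, seawater density, and specific heat capacity below are assumed round values for illustration, not measured quantities:

```python
# 1. ARGO coverage: the 0-2 km ocean layer shared among ~3600 buoys.
ocean_area_km2 = 3.6e8                 # assumed round figure for global ocean area
depth_km = 2.0                         # ARGO profiling depth
n_buoys = 3600

volume_per_buoy = ocean_area_km2 * depth_km / n_buoys
print(volume_per_buoy)                              # 200000.0 km^3 per buoy
print(round((volume_per_buoy / depth_km) ** 0.5))   # 316 km on a side

# 2. Back-converting 260 ZJ of heat content change (1970-2014) to a mean
#    temperature change of the 0-2000 m layer: dT = E / (m * c).
heat_gain_joules = 260e21              # 260 ZJ
density_kg_m3 = 1030.0                 # assumed seawater density
specific_heat = 3990.0                 # assumed J/(kg K) for seawater

layer_mass_kg = ocean_area_km2 * 1e6 * depth_km * 1e3 * density_kg_m3
delta_T = heat_gain_joules / (layer_mass_kg * specific_heat)
print(round(delta_T, 2))               # 0.09 K over the 44 years
print(round(delta_T / 44 * 100, 2))    # 0.2 K/century equivalent
```

On these assumed figures the 0.2 K/century number quoted in the text is reproduced; a more careful choice of layer mass would shift the result only slightly.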
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions that are relevant to land-based life on Earth. Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean. Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere. If the “deep heat” explanation for the hiatus in global warming were correct (and it is merely one among dozens that have been offered), then the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.

The UAH v. 6.0 dataset

The long-awaited new version of the UAH dataset is here at last. The headline change is that the warming trend has fallen from 0.14 to 0.11 C° per decade since 1979. The UAH and RSS datasets are now very close to one another, and there is a clear difference between the warming rates shown by the satellite and terrestrial datasets. Roy Spencer’s website, drroyspencer.com, has an interesting explanation of the reasons for the change in the dataset. 
When I mentioned to him that the usual suspects would challenge the alterations that have been made to the dataset, he replied: “It is what it is.” In that one short sentence, true science is encapsulated. Below, Fig. T7 shows the two versions of the UAH dataset superimposed on one another. Fig. T8 plots the differences between the two versions. Fig. T7. The two UAH versions superimposed on one another. Fig. T8. Difference between UAH v. 6 and v. 5.6. Related Link: It’s Official – There are now 66 excuses for Temp ‘pause’ – Updated list of 66 excuses for the 18-26 year ‘pause’ in global warming

Study: Global warming has slowed

‘Statistically, it’s pretty unlikely that an 11-year hiatus in warming, like the one we saw at the start of this century, would occur if the underlying human-caused warming was progressing at a rate as fast as the most severe IPCC projections,’ Brown said. ‘Hiatus periods of 11 years or longer are more likely to occur under a middle-of-the-road scenario.’ Under the IPCC’s middle-of-the-road scenario, there was a 70 per cent likelihood that at least one hiatus lasting 11 years or longer would occur between 1993 and 2050, Brown said. ‘That matches up well with what we’re seeing.’ There’s no guarantee, however, that this rate of warming will remain steady in coming years, Li stressed. ‘Our analysis clearly shows that we shouldn’t expect the observed rates of warming to be constant. They can and do change.’ Read more: http://www.dailymail.co.uk/sciencetech/article-3052926/Our-climate-models-WRONG-Global-warming-slowed-recent-changes-natural-variability-says-study.html

Global Warming ‘Pause’ Continues — Temperature Standstill Lengthens to 18 years 4 months

El Niño or ñot, the Pause lengthens again Global temperature update: no warming for 18 years 4 months By Christopher Monckton of Brenchley Since December 1996 there has been no global warming at all (Fig. 1). This month’s RSS temperature – so far unaffected by the most persistent el Niño conditions of the present rather attenuated cycle – shows a new record length for the ever-Greater Pause: 18 years 4 months – and counting. This result rather surprises me. I’d expected even a weak el Niño to have more effect than this, but it is always possible that the temperature increase that usually accompanies an el Niño will come through after a lag of four or five months. On the other hand, Roy Spencer, at his always-to-the-point blog (drroyspencer.com), says: “We are probably past the point of reaching a new peak temperature anomaly from the current El Niño, suggesting it was rather weak.” I shall defer to the expert, with pleasure. For if la Niña conditions begin to cool the oceans in time, there could be quite some lengthening of the Pause just in time for the Paris world-government summit in December. Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean surface temperature anomaly dataset shows no global warming for 18 years 4 months since December 1996. The hiatus period of 18 years 4 months, or 220 months, is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. Given that the Paris summit is approaching and most “world leaders” are not being told the truth about the Pause, it would be a great help if readers were to do their best to let their national negotiators and politicians know that unexciting reality continues to diverge ever more spectacularly from the bizarre “settled-science” predictions on which Thermageddon was built. The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 
3), on the one hand, and the observed outturn, on the other, also continues to widen, and is now becoming a real embarrassment to the profiteers of doom – or would be, if the mainstream news media were actually to report the data rather than merely repeating the failed predictions of catastrophe. Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 303 months January 1990 to March 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.4 K/century equivalent, taken as the mean of the RSS and UAH satellite monthly mean lower-troposphere temperature anomalies. Figure 3. Predicted temperature change, January 2005 to March 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH satellite lower-troposphere temperature anomalies. The Technical Note has now been much expanded to take account of the fact that the oceans, according to the ARGO bathythermograph data, are scarcely warming.

Key facts about global temperature

Ø The RSS satellite dataset shows no global warming at all for 220 months from December 1996 to March 2015 – more than half the 435-month satellite record.
Ø The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us.
Ø Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
Ø The fastest warming rate lasting ten years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century. 
Ø In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century. Ø The global warming trend since 1990, when the IPCC wrote its first report, is equivalent to below 1.4 Cº per century – half of what the IPCC had then predicted. Ø Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100. Ø The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than ten years that has been measured since 1950. Ø The IPCC’s 4.8 Cº-by-2100 prediction is almost four times the observed real-world warming trend since we might in theory have begun influencing it in 1950. Ø The oceans, according to the 3600+ ARGO bathythermograph buoys, are warming at a rate equivalent to just 0.02 Cº per decade, or 0.2 Cº per century. Ø Recent extreme weather cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.   Technical note Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend. The RSS dataset is arguably less unreliable than other datasets in that it shows the 1998 Great El Niño more clearly than all other datasets (though UAH runs it close). The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that RSS is better able to capture such fluctuations without artificially filtering them out than other datasets. 
Besides, there is in practice little statistical difference between the RSS and other datasets over the 18-year period of the Great Pause. Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years. The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file, takes their mean and plots them automatically using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity. The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression. 
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat. RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures. Dr Mears’ results are summarized in Fig. T1: Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of Chichón (1983) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998. Dr Mears writes: “The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.” Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph: “Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  
… The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.” In fact, the spike in temperatures caused by the Great el Niño of 1998 is largely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. Curiously, Dr Mears prefers the much-altered terrestrial datasets to the satellite datasets. However, over the entire length of the RSS and UAH series since 1979, the trends on the mean of the terrestrial datasets and on the mean of the satellite datasets are near-identical. Indeed, the UK Met Office uses the satellite record to calibrate its own terrestrial record. The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. It remains possible that el Nino-like conditions may prevail this year, reducing the length of the Great Pause. However, the discrepancy between prediction and observation continues to widen. Sources of the IPCC projections in Figs. 2 and 3 IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded: “Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. 
… There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.” That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted. In 1990, the IPCC said this: “Based on current models we predict: “under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii). Later, the IPCC said: “The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv). The orange region in Fig. 2 represents the IPCC’s less extreme medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025, rather than its more extreme Scenario-A estimate, i.e. 1.8 [1.3, 3.7] K by 2030. Some try to say the IPCC did not predict the straight-line global warming rate that is shown in Figs. 2-3. In fact, however, the IPCC’s predicted global warming over so short a term as the 25 years from 1990 to the present are little different from a straight line (Fig. T2). Figure T2. 
Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii). Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely. Likewise, to reach 1.8 K by 2030, there would have to be four or five times as much warming in the next 15 years as there was in the last 25 years. That is still less likely. But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3). Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990). Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date. True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted, and the predictions were extravagantly baseless. The overall picture is clear. 
Scenario A is the emissions scenario from 1990 that is closest to the observed emissions outturn, and yet there has only been a third of a degree of global warming since 1990 – about half of what the IPCC had then predicted with what it called “substantial confidence”. Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality. To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean surface temperature anomalies – is 0.35 Cº, equivalent to just 1.4 Cº/century, or a little below half of the central estimate of 0.70 Cº, equivalent to 2.8 Cº/century, that was predicted for Scenario A in IPCC (1990). The outturn is visibly well below even the least estimate. In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 C/century equivalent. Now it is just 1.7 Cº equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration. Is the ocean warming? 
One frequently discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the "missing heat" has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.

Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys somehow has to cover 200,000 cubic kilometres of ocean – a 100,000-square-kilometre box more than 316 km square and 2 km deep. Plainly, results at a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be much better than guesswork.

Fortunately, a long-standing bug in the ARGO data delivery system has now been fixed, so I am able to get the monthly global mean ocean temperature data – though ARGO seems not to have updated the dataset since December 2014. However, that gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº/decade, or 0.2 Cº/century.

Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).

Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which makes the change seem a whole lot larger.
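The "least-squares linear-regression trend" invoked throughout (Fig. 1, Fig. T5) is an ordinary least-squares fit to a monthly anomaly series, with the slope rescaled to a per-decade rate. A minimal sketch, using a made-up synthetic series rather than real RSS, UAH or ARGO data:

```python
# Sketch of the least-squares trend calculation the article applies to
# monthly anomaly series. The input below is synthetic placeholder data,
# not an actual satellite or ARGO record.
import numpy as np

def trend_per_decade(anomalies_k):
    """Slope of an OLS straight-line fit to a monthly series, in K/decade."""
    months = np.arange(len(anomalies_k))
    slope_per_month, _intercept = np.polyfit(months, anomalies_k, 1)
    return slope_per_month * 120.0  # 120 months per decade

# Example: 11 years (132 months) of synthetic data drifting at 0.002 K/month
series = 0.002 * np.arange(132)
print(round(trend_per_decade(series), 3))  # -> 0.24
```

Multiplying by a further factor of 10 gives the "Cº/century equivalent" the article quotes; on a series this short, the choice of start month can move the fitted slope substantially, which is one reason trend start dates are argued over.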
The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those "Hiroshima bombs of heat" are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.

Figure T6. Ocean heat content change, 1957-2013, in zettajoules, from NOAA's NODC Ocean Climate Lab (http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT), with the heat content values converted back to the ocean temperature changes, in fractions of a Kelvin, that were originally measured.

NOAA's conversion of the minuscule temperature-change data to zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.

Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA's data and those of the ARGO system. Over the period of ARGO data, 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº/decade, equivalent to 0.5 Cº/century, or rather more than double the rate shown by ARGO. ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low, one should treat all these results with caution.

What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the "missing heat" is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming.
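The conversion from zettajoules back to a mean temperature change is straightforward heat-capacity arithmetic. A rough sketch, using textbook values for ocean area, seawater density and specific heat rather than NOAA's exact parameters, and assuming the 260 ZJ applies to the 0-2000 m layer:

```python
# Back-of-envelope conversion of ocean heat content change (ZJ) to a
# mean temperature change of the 0-2000 m layer. All physical constants
# below are rough textbook values, not NOAA's published parameters.
OCEAN_AREA_M2 = 3.6e14     # global ocean surface area, m^2
LAYER_DEPTH_M = 2000.0     # depth of the layer the data cover, m
DENSITY = 1030.0           # seawater density, kg/m^3
SPECIFIC_HEAT = 3990.0     # seawater specific heat, J/(kg K)

LAYER_MASS_KG = OCEAN_AREA_M2 * LAYER_DEPTH_M * DENSITY  # ~7.4e20 kg

def zj_to_delta_k(zettajoules):
    """Mean temperature change of the layer implied by a heat gain in ZJ."""
    return zettajoules * 1e21 / (LAYER_MASS_KG * SPECIFIC_HEAT)

delta_t = zj_to_delta_k(260.0)            # heat gain quoted for 1970-2014
print(round(delta_t, 3))                  # total change in K, ~0.09
print(round(delta_t * 100.0 / 44.0, 2))   # per-century equivalent -> 0.2
```

With these assumed constants, 260 ZJ works out to a little under 0.1 K over 44 years, i.e. roughly the 0.2 K/century equivalent the article states; a shallower assumed layer would give a proportionally larger temperature change.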
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata, or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.

Most ocean models used in coupled general-circulation-model sensitivity runs simply cannot resolve most of the physical processes relevant to heat uptake by the deep ocean. Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.

If the "deep heat" explanation for the hiatus in global warming were correct (and it is merely one among dozens that have been offered), then the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.

Related Link: It's Official – There are now 66 excuses for the temperature 'pause' – Updated list of 66 excuses for the 18-26 year 'pause' in global warming
