The “Lack of Recent Warming” Crock: No Cigar, But Thanks for Playing!

April 24, 2012

Climate Crocks advisor Andrew Dessler alerted me to this new piece by John Nielsen-Gammon.
Per Dessler, “these graphics are some of the best I’ve seen to explain why the ‘lack of recent warming’ is nothing of the kind.”

John Nielsen-Gammon is the Texas State Climatologist and a Professor of Atmospheric Sciences at Texas A&M University.  Viewers may remember him from my snapshot of the Great Texas drought.

Reposted with permission:

It’s common knowledge among those who follow such things that global temperatures have not gone up very much in the past several years.  This has caused many to believe that the recent lack of warming contradicts what climate models say should happen in response to the increasing Tyndall gases.  This, in turn, has provoked the counterargument that the Earth is still warming, just on a longer time scale, or that the recent period is too short to yield statistically significant results.

These counterarguments are not compelling.  Fundamentally, any change in global temperature, even if it’s just from one year to another, must have a cause.  Saying that we need to look at longer time scales denies the need to find the cause of the actual global temperature changes (or lack thereof) at shorter time scales.

Such causes have been sought, and a few papers have proposed various combinations of cloud cover, volcanic aerosols, the El Niño/Southern Oscillation (ENSO), deep ocean heat uptake, and so forth.  A recent paper I like by Foster and Rahmstorf (discussed here and here) takes a statistical approach to attempt to eliminate the effect of the other known forcing mechanisms, and what’s left over is a fairly steady warming.  Others have noted, more casually, that 2011 was the warmest La Niña year on record.

I decided to take a simple approach at looking at the effect of ENSO.  Using GISTemp Land/Ocean Index values and Niño 3.4 values, I computed 12-month running averages of Niño 3.4 and compared them to the average GISTemp values at lags of 0, 3, and 6 months.  Foster and Rahmstorf used a different ENSO index and found optimal lags between 2 and 5 months.  So one would guess that a 3-month lag would fit the data best in my case, and indeed it did.
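For readers who want to try this at home, the comparison can be sketched in a few lines of Python. This is a rough sketch, not Nielsen-Gammon's actual code: the smoothing alignment and the fit statistic he used aren't specified above, so a trailing running mean and plain correlation stand in here.

```python
import numpy as np

def running_mean(x, window=12):
    """Trailing 12-month running average (a simple stand-in for
    whatever smoothing the original analysis used)."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def lag_correlation(nino34_monthly, gistemp_monthly, lags=(0, 3, 6)):
    """Correlate smoothed Nino 3.4 with smoothed GISTemp at several lags.

    Inputs are 1-D monthly anomaly series covering the same period.
    Returns {lag_in_months: correlation}; the best-fitting lag is the
    one with the highest correlation (about 3 months in the post).
    """
    n34 = running_mean(np.asarray(nino34_monthly, dtype=float))
    gt = running_mean(np.asarray(gistemp_monthly, dtype=float))
    out = {}
    for lag in lags:
        b = gt[lag:]       # temperature series shifted `lag` months later
        a = n34[:len(b)]   # corresponding earlier ENSO values
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out
```

With the real GISTemp and Niño 3.4 series, the lag-3 entry should come out largest, matching the result described above.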

The normal threshold for El Niño or La Niña, as applied by the Climate Prediction Center, is for five consecutive months of at least 0.5 C above or below normal in a key region of the tropical Pacific.  For working with annual data, I decided to call an annual average above 0.5 C an El Niño and an annual average below -0.5 C a La Niña.  Then I plotted it up, color-coding each year for whether it was El Niño, La Niña, or neither (neutral).  Here’s the result:

GISTemp global temperatures, 1951-2011

We see the latter half of the mid-century flat period, followed by the warming since 1970 and the relatively flat recent few years.  We also see a few years that were exceptionally cold and whose timing fits with the known injection of aerosols into the stratosphere by the mighty volcanic eruptions of Agung and Pinatubo.  It’s easy to see that both of these eruptions caused global temperatures to drop by about 0.3 C temporarily before recovering as the aerosols settled out of the stratosphere over the following 2-3 years.  Finally, we see that, as is well known, La Niña years tend to be globally cold years and El Niño years tend to be globally warm, with a global lag of three months as mentioned earlier.  And, we see that in a head-to-head match between El Niño and Pinatubo, Pinatubo wins.

To dig deeper, I’ll zoom in on the period since Agung.  This isolates the period of nearly steady warming since 1970 and lets us focus a bit more on what has happened since 1998 or so.  Here’s the chart:

GISTemp global temperatures from 1967 to present

Somehow, it no longer appears that global temperatures have leveled off in the past decade.  That is because, with the color coding according to the phase of ENSO, the eye is able to compare apples to apples: the upward long-term trend during El Niño years (red triangles) is plain, the upward long-term trend during neutral years (green squares) is plain, and the upward long-term trend during La Niña years (blue diamonds) is plain.

Stare hard enough, though, and you see that they have leveled off.  The last ten data points have little or no trend.  But we see that the lack of trend is at least partly due to the El Niño year near the beginning of the 10-year period and the two La Niña years near the end.

Let’s get quantitative about this.  In this case, with the temperature rise being nearly linear, it helps to add trendlines.  I’ve excluded the three Pinatubo years from the regressions.  Here’s the result:

GISTemp global temperatures, with trends for El Niño, neutral, and La Niña years computed separately. Pinatubo years are excluded.

There aren’t that many full-blown El Niño events, but they seem to be following a steady upward trend.  There are more La Niña events, and they too clearly follow a steady upward trend.  Finally, the many neutral years also show no sign of departing from a steady upward trend.  There’s enough scatter in the neutral years that if one had considered the period 1977-1987, or the period 1987-1997, one might be tempted to say that the neutral years had little or no warming.  But the past decade fits nicely with the long-term upward trend of 0.16 C/decade shown by all three time series.
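The per-phase trendlines are straightforward to reproduce in outline. A minimal sketch, assuming a hypothetical record layout of (year, anomaly, phase) tuples; the specific Pinatubo years excluded are an assumption, since the post only says "three":

```python
import numpy as np

# Assumption: the three excluded post-Pinatubo years.
PINATUBO_YEARS = {1992, 1993, 1994}

def phase_trends(records):
    """Fit a separate least-squares line to each ENSO phase.

    `records` is a list of (year, anomaly_C, phase) tuples with phase
    in {"elnino", "neutral", "lanina"}, classified by the +/-0.5 C
    annual Nino 3.4 threshold described earlier.  Returns
    {phase: slope in C/decade}; the post finds all three near 0.16.
    """
    trends = {}
    for phase in ("elnino", "neutral", "lanina"):
        pts = [(y, t) for (y, t, p) in records
               if p == phase and y not in PINATUBO_YEARS]
        years = np.array([y for y, _ in pts], dtype=float)
        temps = np.array([t for _, t in pts], dtype=float)
        slope_per_year, _ = np.polyfit(years, temps, 1)
        trends[phase] = slope_per_year * 10.0
    return trends
```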

The spacing between the lines is a good measure of the impact of El Niño and La Niña.  All else being equal, an El Niño year will average about 0.2 C warmer globally than a La Niña year.  Each new La Niña year will be about as warm as an El Niño year 13 years prior.
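The 13-year figure follows directly from the two numbers above: the El Niño/La Niña spacing divided by the common trend.

```python
# Numbers from the text: ~0.2 C spacing between the El Nino and
# La Nina trendlines, and a shared trend of 0.16 C/decade.
spacing_C = 0.2
trend_C_per_year = 0.016

years_to_catch_up = spacing_C / trend_C_per_year
print(round(years_to_catch_up, 1))  # 12.5, i.e. "about 13 years"
```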

So we see that a couple of recent La Niñas have caused the recent global temperature trend to level off.  But be honest: doesn’t it seem likely that, barring another major volcanic eruption, the next El Niño will cause global temperatures to break their previous record?  Doesn’t it appear that whatever has caused global temperatures to rise over the past four decades is still going strong?

So about that lack of warming:  Yes, it’s real.  You can thank La Niña.

As for whether this means that Tyndall gases are no longer having an impact: Nice try.


Below, Gammon was prominently featured in my post on drought conditions in Texas.


55 Responses to “The “Lack of Recent Warming” Crock: No Cigar, But Thanks for Playing!”

  1. omnologos Says:

    Nice. Will you alert Skeptical Science on the topic? They haven’t got the memo yet.

  2. witsendnj Says:

    It’s also quite likely that aerosols from pollution are masking the warming, especially considering the huge increase from Asia the last few years – in fact a rather terrifying prospect that if we did reduce emissions, temperatures would heat up far faster than they already are. Fascinating BBC documentary about global dimming:

    • otter17 Says:

      The aerosols are one piece of the climate change puzzle that NOBODY seems to know about (well at least among the lay people I talk to). Dr. Hansen and others are concerned too, considering the aerosols could be masking roughly 1/2 of the additional radiative forcing from the human greenhouse gas emissions. The “Glory” satellite was supposed to measure aerosol distribution and help pin down a more accurate figure for the radiative forcing masking worldwide, but the rocket carrying it crashed. I joke around with an engineer colleague of mine who used to work for Orbital Corporation, who built the rocket.

  3. caerbannog666 Says:

    Two denier guidelines re: the surface temperature record.

    1) Any short-term warming period is the result of poor station location, doctoring of data, etc.

    2) Any short-term cooling period is genuine — the “problems” in (1) don’t apply.

  4. Jeff MacLeod Says:

    caerbannog666 you forgot…

    3) Ignore any data or studies that go against your personal belief and accuse the producers of these items of being socialist, communists, scientists just in it for the money, governments and increased taxation, or whatever other reason you can think of.

    I’ve even seen a poster state that extra-dimensional beings who live in the moon are the cause of confusion.

  5. daveburton Says:

    This is a good (if unintentional) illustration of the problems with the temperature data. Don’t you think it bizarre when you see a graph that has 1998 as much cooler than two subsequent years? Maybe he should stop using Hansen’s damaged data. Here’s HADCRUT:

    Note that 1998 was much warmer than any subsequent year.

    And in other news… Delivering an unprecedented slap at NASA’s promotion of irrational climate alarmism, 49 former astronauts, scientists & engineers, including former Johnson Space Center Director Chris Kraft, and seven Apollo astronauts, have gone public with their objections to NASA’s politicization of science.

    • greenman3610 Says:

      I’ll give you the benefit of the doubt and assume this is ignorance rather than duplicity.
      Gammon uses GISS. HADCRUT3, which you cite, famously did not include the Arctic, as GISS does and as the new HADCRUT4 now does.
      see real climate explanation:
      “As expected, the changes (a little from both data sets) lead to a minor rearrangement in the ordering of ‘hottest years’. This is not climatologically very significant – the difference between 1998 and 2010 is in the hundredths of a degree, and most of the attribution work on recent climate changes is looking at longer term trends, not year to year variability. However, there is now consistency across the data sets that 2005 and 2010 likely topped 1998 as the warmest years in the instrumental record. Note that neither CRUTEM4 nor HadSST3 are yet being updated in real time – they only go to Dec 2010 – though that will be extended over the next few months.”

      as for your non climate science trained astronauts, their fearless leader “Jack” Schmitt was the subject of my video exposing his bald-faced lies.

    • otter17 Says:


      A vast majority of experts in the field of climate science as well as the National Academy of Sciences concludes that some form of emissions reduction plan is needed. Consider the possibility that these former astronauts and engineers may very well be the ones that are attempting to politicize science by using their credentials to sway the public’s perception of the science away from solution plans. Consider that these former NASA employees may dislike the proposed solutions for climate change mitigation, and are taking more of an activist position. This seems more consistent with their behavior, since they are promoting a letter of dissent to the media rather than taking the scientific route. Scientific knowledge and conclusions generally come from peer reviewed research and academy position statements that are based on the research.

      The National Academy of Sciences, the American Association for the Advancement of Science, the American Geophysical Union, practicing climate scientists, and the sheer weight of the evidence provide far more credibility than a retired astronaut or engineer’s opinions.

  6. daveburton Says:

    Actually, it’s more due to the guessed Arctic numbers where there’s no actual data, and the “adjustments” that NASA makes to the data, not just in the far north (the Arctic, plus Iceland, etc.), but also in the United States, and even Australia.

    • greenman3610 Says:

      back track as you like, your misrepresentation reflects poorly.

      • suyts Says:

        Lol, that’s not back tracking. That’s true. GISS doesn’t have real thermometers where they are guessing Arctic and Antarctic temps. So HadCrut included imaginary temps just like GISS and what do you know? The data sets are pretty similar now. Surprise! I don’t know why GISS and Hadley are still separate….. they use the same data to come to the same conclusions…… wow. Shocking.

        If I and my friends agree upon the same imaginary numbers, we could add them and come to the same conclusions! Almost every time!

        • greenman3610 Says:

          It’s so weird that the ice cap is melting coincidentally at the same time they are making up those temperatures.
          Life is weird. Go figure.

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

            What’s really remarkable is that he posted that *after* I posted the following:

            1) Google Earth imagery showing station locations in the Arctic.

            2) A plot that shows that you can compute global-average temperature results that line up very closely with NASA’s even when you sample the entire globe much more sparsely than the Arctic is currently sampled.

            From (1) and (2), it should be obvious to any reasonable person that there is enough data to calculate average temperatures in the Arctic.

    • (The stuff below is an ugly copy-paste job from a vi terminal session — no guarantees as to how it will look)

      Folks, it wasn’t all that long ago that I supplied Burton with raw temperature data *and* computer source-code that produces results very similar to the results that NASA produces — *without* any of NASA’s “adjustments”.

      That’s right — the code I supplied him implements a very straightforward area-weighted averaging procedure that when applied to *raw* temperature data, replicates the NASA results very nicely.

      And here he is, *still* pushing lies about NASA’s global temperature work, lies that can easily be disproved by a competent programmer willing to put in a few days of “spare time” programming/analysis effort.

      To see just how dishonest and stupid the deniers’ attacks on NASA’s global-temperature computations have been, check out the following images that I generated from my code (with a little help from Google-Earth).

      To start off, here’s a Google-Earth image that shows the complete lack (that’s sarcasm, folks) of temperature stations in the Arctic: This plot shows most (but not all) of the temperature stations (visible from this perspective) used by NASA to generate its “meteorological stations” temperature results.

      And here’s another interesting plot that I generated from my code: It shows “Sparse Rural Stations” results that I computed from 85 rural temperature stations compared with the NASA/GISS “meteorological stations” index: RAW data, BTW.

      Now take a look at *this* Google-Earth image that shows the locations of those “Sparse Rural Stations”:

      You really don’t have to have a dense sampling of global temperatures to compute darned good global-scale average results. So when someone complains about the supposed “lack of coverage” of the temperature network in the Arctic or other places, show them these results. I sampled the entire globe more sparsely than the supposedly poorly-covered Arctic is sampled and I still got good results.

      Although 85 stations were selected in total (at random) by my gridding procedure for the “Sparse Rural Stations” processing, only about 50 stations reported data on average for any given month/year. The station selection procedure was: divide the Earth into grid-cells of approximately equal area, starting with 20deg x 20deg grid cells at the equator. Search each grid-cell for the station with the longest temperature record. Use one and only one station per grid cell. Compute the monthly-average temperature anomalies from the selected stations and just average them together for each year. That’s it — no cherry-picking involved. I can vary the grid sizes, station selection procedures, etc. and I still get results that look like this.
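      That selection procedure is simple enough to sketch. A rough Python outline, assuming a hypothetical station layout (the real code surely differs in detail):

```python
import math

def grid_cell(lat, lon, lat_step=20.0):
    """Map a station to an approximately equal-area grid cell.

    Cells are lat_step degrees tall; the longitude width grows by
    1/cos(latitude) away from the equator so cell areas stay roughly
    constant (20 x 20 degrees at the equator, as described above).
    """
    row = int((lat + 90.0) // lat_step)
    mid_lat = math.radians(-90.0 + (row + 0.5) * lat_step)
    lon_step = min(360.0, lat_step / max(math.cos(mid_lat), 1e-6))
    col = int((lon + 180.0) // lon_step)
    return row, col

def select_stations(stations):
    """One station per cell: the one with the longest record.

    `stations` is a list of dicts with hypothetical keys 'lat',
    'lon', 'record_years'; real station metadata differs.
    """
    best = {}
    for s in stations:
        cell = grid_cell(s["lat"], s["lon"])
        if cell not in best or s["record_years"] > best[cell]["record_years"]:
            best[cell] = s
    return list(best.values())
```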

      Oh, and did I mention that I used only *RAW* temperature data?

      • otter17 Says:

        Good job. Keep trying to engage him, though generally deniers disengage to fight another day when you have them completely cornered like this. Monckton versus potholer is the most high profile recent example, but I have been able to get people to bail in some forum discussions. Oh well. Another day, another crock.

        • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

          Wow — an idiot actually voted down my post.

          Some followup comments before this thread rolls off the page…

          After hearing deniers just go on and on and on…. about how the indicated warming trend was the result of UHI, “data manipulation”, “dropped stations”, etc., I decided to download the raw GHCN temperature data and take a crack at it myself.

          I found that confirming NASA’s published results was much easier than I thought it would be. Modern (free/open-source) software development tools make projects like this much easier to tackle than they used to be. Got a crude “one off” program up and running surprisingly quickly. And then with a basic program working, modifying it to perform a number of “experiments”: like processing just rural stations, or choosing random stations, etc. etc. was really quite easy (often just a few lines of source-code to be added/changed).

          I found that a warming trend that closely matched NASA’s would pop out no matter what I did with the data. Choose just rural stations? Same warming trend. Choose just stations still actively reporting data (thereby eliminating the “dropped stations” issue)? Same warming trend. Choose just a few dozen stations scattered around the world (all rural)? Same warming trend.

          Process raw vs “adjusted” data? Very similar warming trends. (BTW, all the results that I have ever put up here or elsewhere were computed from *RAW* temperature data).

          Did the same with the CRU “climategate” raw temperature data that the CRU released last summer — same results.

          Generating the XML station lat/long data needed to generate the Google-Earth pix was slam-dunk easy.

          In fact, this turned out to be such a straightforward project that I’m kicking myself for not “getting a round tuit” two or three years ago! I could teach this stuff to second-semester computer programming students!

          In several discussion fora, I have supplied to deniers the data, code, and instructions describing how to reproduce my results — not once has one of them acknowledged that he/she may have been incorrect. They just take their conspiracy-mongering to the next level: “All the National Weather Services doctor their raw data to show warming”.

          I’ll never convince them with my results — but I hope that what I’ve done will convince others to point and laugh at them.

          • suyts Says:

            I rated that comment down because you cried about being rated down. That said, you need to look into what “raw” means. Then, you need to look at how rural is defined. I don’t mind people challenging things, but if you’re going to pretend to argue against the skeptical position, you should at least take the time to actually see what they’re stating. Otherwise, you’ve created a very nice strawman and deftly defeated it……. congratulations.

            But, getting back to the topic at hand, the argument here seems to be that ENSO has caused the lack of recent warming. What with the La Nina running wild and whatnot. Let’s forget for the moment that Nielsen-Gammon forgot the 2009-2010 El Nino….. or colored it out of the picture for some reason ….. prolly ’cause 2010 wasn’t like really warm or anything like that which would alter his extrapolations or whatever. We can set that aside for now.

            But if one uses the argument that ENSO is causing the flattening/cooling, doesn’t one feel obliged to go back and look at ENSO numbers during the time period we saw most of the warmth increase? Say from 1970-1998? Well, you’re spiffy with numbers and graphs and whatnot….. give that one a whirl sparky.

          • otter17 Says:

            Are you serious? So supposedly in their minds, all the raw data has been manipulated, and not one person that records the data in the field has stepped up to say something? Hahaha, that is hilarious.

          • otter17 Says:

            Well, hilarious or very sad. Now that I think about it, sad, actually.

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:


            A little tinfoilhattery for your reading pleasure:


            (Note the date on that post — since then, none of those “experts” has shown up to review my work).


            The rest of the thread contains more of the same….

          • suyts Says:

            Was it that you didn’t know, or were you trying to pretend that the “raw” data wasn’t already adjusted?

            This is why it is difficult to have a discussion. Aside from the fact I have a comment still awaiting moderation, which always happens on alarmist blogs. But I don’t know whether to be insulted, insult back, or try to patiently explain why you don’t know what you just did. But, I do know, most often people meet that information with resentment.

            On the comment not posted, I simply gave you a couple of hints as to why your “raw” data wasn’t “raw” after all, but, alas, it wasn’t to be. Good luck on sifting through the rest of it.

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:


            You are nothing more than a typical loudmouthed, lazy, incompetent, know-nothing denier.

            The GHCN raw data is neither owned nor controlled by NOAA — it is gathered from the NWS agencies of countries all over the world and redistributed unaltered. Anyone who wants to analyze the GHCN data can choose the NOAA-adjusted data or the raw data taken directly from the NWS offices around the world. A competent programmer/analyst will soon discover that the raw and adjusted data-sets produce very similar global-average temperature results. And anyone who doubts the provenance of the raw data supplied by NOAA is perfectly free to contact the various NWS offices around the world and get the data directly from them.

            As for the “definition” of rural — I know perfectly well what “rural” means — and I verified the rural status of the stations I processed by examining their locations with Google Earth satellite imagery. You see, in addition to writing software to compute global-average temperatures, I also wrote code that converts lat/long station metadata into XML code that Google Earth can read. So it was not hard at all for me to “zoom in” and verify their rural status.

            Now, if you really want to challenge the validity of my results, then get off your lazy backside and produce your own global-temperature estimates. Go through the data, pick out some certified rural stations scattered around the globe (that you independently verify with Google Earth) and show us that the global-warming signal is not robust. But don’t get your hopes up — in addition to the results I linked to here, I generated a bunch of processing runs where I throw out a different 90% of the stations at random for each run. Got similar warming results for all of those runs.

            It’s quite telling that whenever I show results produced by raw data — data, btw, that you guys readily accept as “raw” when you use it to accuse NOAA of fraudulently manipulating its homogenized data — somehow magically gets redefined as “adjusted” by you guys whenever I use it to disprove your inane claims.

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

            Just following up to add — anyone who thinks that the NWS offices of the 180+ countries around the world have somehow “adjusted” their raw temperature data in such a way as to produce a similar global-warming signal no matter what small subset of stations you process is so delusional that the only reasonable response is to point and laugh.

            The “raw data actually has been adjusted” claim is just plain stupid and delusional — full stop. Nobody who makes such an idiotic claim deserves so much as a second’s worth of serious consideration.

          • suyts Says:


            So, reading comprehension isn’t your strong suit? I gave you the link that explained what was your “raw” data.

            Again, I’d like to thank you for proving me correct. It’s impossible to discuss these issues with people who won’t read the links, or understand what is being stated.

            As far as your results go, there’s no way I can challenge them unless you send me your code. I’d be happy to evaluate. I’m fairly well versed. As far as rural goes, I’m interested…. how did you determine how to weight the various “rural” locations? I suspect you’ve no idea what that means. Not all rural areas are the same, nor is their history. Now, tell me about lazy…. tell me that while I’m still working through how to weight the various rural and urban areas, and you just buzzed right through them, that I’m lazy. That’s fine….. tell me how you weighted them. Maybe I’m just slow and you’re so much quicker. Or maybe you don’t really know what you’re talking about. I quit coming to these types of sites a long time ago. I thought just for a minute that things may have changed. Thanks. You’ve shown me how they haven’t. Condescension without reason. Gotta love it!

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

            I’m fairly well versed. As far as rural goes, I’m interested…. how did you determine how to weight the various “rural” locations? I suspect you’ve no idea what that means.

            I know full well what weighting means.

            I used standard area-weighting — gridding/averaging for full-up processing; for “sparse stations” processing, gridding wasn’t necessary because I selected a single rural station per grid cell. In that case, the processing reduced to a straight average of station anomalies, for the obvious reason.

            I did a bit of experimentation to see how “stripped down” the procedure could be and still produce results similar to the NASA “meteorological stations” index.

            I found that very large grid-cells worked well: 20 degrees x 20 degrees at the Equator, with longitude dimensions adjusted to keep grid-cell areas approximately constant as you move N/S from the equator. Simplifies processing by eliminating the need to interpolate to “empty” grid-cells, but still provides enough area-weighting to keep densely-sampled parts of the globe from dominating the averages.
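            The baselining-and-averaging step for the sparse runs ("a straight average of station anomalies", as described above) can likewise be sketched. Annual means are used here for brevity, and the data layout is hypothetical, where the real code works with monthly anomalies:

```python
from collections import defaultdict

def sparse_index(stations, base=(1951, 1980)):
    """Straight average of per-station anomalies.

    `stations` maps a station id to {year: annual mean temp in C}
    (hypothetical layout).  Each station is anomalized against its
    own 1951-1980 mean, then anomalies are averaged across whichever
    stations report in a given year -- no gridding needed when each
    grid cell holds one station.
    """
    anoms = defaultdict(list)
    for series in stations.values():
        base_vals = [t for y, t in series.items() if base[0] <= y <= base[1]]
        if not base_vals:
            continue  # station cannot be baselined; skip it
        clim = sum(base_vals) / len(base_vals)
        for y, t in series.items():
            anoms[y].append(t - clim)
    return {y: sum(v) / len(v) for y, v in sorted(anoms.items())}
```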

            Full-up processing (all stations with enough data to compute 1951-1980 baselines) replicates the NASA “meteorological stations” index quite closely. Sparse processing (one station per grid-cell) *still* matches the NASA results well.

            The fact that such a simple implementation still replicates the much more sophisticated NASA processing demonstrates the robustness of the temperature record (and the vacuousness of the NASA critics).

            Code and instructions available at: All info on that page is publicly available to anyone with a facebook account.

        • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

          Following up: It should be pointed out that a number of other people have independently replicated the NASA/GISS temperature results using procedures far more sophisticated than what I coded up. These would include tamino as well as the folks who run the Clear Climate Code project.

          They have independently determined that UHI has virtually no impact on global-average results, that the “dropped stations” claim made by Anthony Watts and Co. is completely without merit, that raw and “homogenized” data produce very similar results, and that the NASA/GISS results can be replicated surprisingly closely with as few as 50-60 stations scattered around the world.

          I got the same results, even though my processing is almost embarrassingly simple relative to NASA/Clear-Climate-Code/tamino/etc.

          The bottom line is, even techniques simple enough to be taught to college freshmen are sufficient to disprove all of the major denier claims about the NASA/GISS surface temperature results.

          And one more thing to think about: Did the NWS offices around the world also adjust their temperature data to fit the ENSO cycle?

          • suyts Says:

            g2, you’ve convinced me….. ENSO runs the temp record. 😐

            Following up with the other part of our conversation, I wasn’t clear…. my apologies.

            When I asked about the weighting, I meant about the weighting of rural to urban areas. As you know, some rural areas are more rural than others. For instance, a place with a pop density of 1 person/sqkm isn’t the same as 20/sqkm. Further, it would stand to reason that the effects on the thermometers would be different per development of that area. For instance, 20 people/sqkm in Mongolia would have a different effect on the area than 20 people/sqkm in Australia.

            Then there is the effect of population increase over time, or in some cases decrease.

            While I imagine you’d have an adverse reaction to the name, Dr. Spencer has worked quite a bit with these issues and has some interesting findings. You can go to his site or WUWT to see some of his work in that area.

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

            Regarding Spencer’s study — his “population adjusted” temperature results completely disagree with his satellite-based temperature trend results. That should raise all kinds of red flags about the validity of his methodology.

            Tamino addressed that point quite nicely:

            The temperature trend estimate over CONUS derived from Spencer’s own satellite data is on the order of 17 times what his “population adjusted” surface temperature trend indicates. A difference that is *extremely* statistically significant (95% range is 6.8 to 27 times that of Spencer’s “adjusted” trend).

            Furthermore, Spencer states, “We find that the warming trend with time increases rapidly with population at low population densities, then levels off at high population densities.”

            In that case, we should see substantially *more* global-average warming with data taken from rural stations vs. data taken from stations in large, long-established urban areas. But repeated analyses performed by different groups of people (both professional and amateurs) shows very similar global-average trends from rural vs. urban stations.

          • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:

            Following up with Spencer’s “population density adjustments” — Spencer stepped in it “big time” with a silly goof that invalidates his thesis. He baselined his corrections re: a population density of 0 rather than the average population density of the USA. That is, a “population density correction” applied to a station in an area with average population density should, of course, be 0. But Spencer didn’t do that — he subtracted his “population density” correction from *all* stations, whether or not they were located in areas with above-average population densities.

            I looked at Spencer’s methodology and it absolutely didn’t look right — couldn’t quite put my finger on the problem in the brief time that I looked at what he did. But Nick Stokes, someone who has done a lot of temp data analysis, zeroed right in on the problem over at Tamino’s place. And I quote:

            But that corrects to zero PD. And that’s not the situation in America. In fact, the PD of ConUS is about 40/sq km. And if you read that value off his regression, it corresponds to about 0.12 C/decade.

            On that basis his PDA corrected value would be 0.133C/decade, not 0.013.
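            Stokes’ point reduces to one line of arithmetic using only the numbers he quotes:

```python
# Numbers quoted from Nick Stokes' comment above.
spencer_corrected = 0.013  # C/decade: Spencer's trend after subtracting
                           # the PD effect all the way down to zero density
pd_effect_at_mean = 0.12   # C/decade: regression value at the actual
                           # CONUS mean density (~40 people per sq km)

# Referencing the correction to the national mean density instead of
# zero restores the part that was over-subtracted:
properly_corrected = spencer_corrected + pd_effect_at_mean
print(round(properly_corrected, 3))  # 0.133, not 0.013
```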

          • suyts Says:

            g2, I wasn’t asserting Spencer had everything correct. I’m stating much more isn’t known than what we think we know. Spencer acknowledges further work needs to be done. I really don’t believe comparing satellite records with ground based records is an appropriate comparison, and I don’t believe that was the purpose of Spencer’s efforts.

            The purpose, I believe, is to pull the noise out of the surface temp record. While there are many attempting to do so, they seem woefully inadequate. GISS applies an algorithm which basically puts our historical temp records in a moving dynamic state. Hadley is on their 4th version. When considering both, we see this is an acknowledgment of inadequacy.

            No one is stating that we might not have generally warmed for the last 200 years or so, or that we didn’t warm in the 70s through the 90s. But before you can affix causation to it, you must adequately extract the noise and spurious signals. Clearly this isn’t being done properly, or there would be no need for a dynamic history or multiple versions of essentially the same data set.

            Satellite measurements would be one way to attempt this, but as we can see with UAH and RSS there’s a pretty significant differential to account for.


  8. Nick Carter Says:

    Interesting conundrum on the elimination of sulphate aerosols and how that will cause SSTs to rise. Did I read correctly that Tom Wigley may have done a paper on this a few years ago? Also sobering: the intensity of this early-season heat wave during ENSO-neutral conditions and a modest solar max. I really wonder what will happen when we get our next big El Niño. Anyone read anything on these topics?


  9. Nick Carter Says:

    Whoops… didn’t mean SSTs, I meant Earth surface temps…

  10. omnologos Says:

    Anybody caring about science around here? Anybody wondering how scientific it is for RC to state “2005 and 2010 likely topped 1998 as the warmest years” after writing “the difference between 1998 and 2010 is in the hundredths of a degree”?

    Anybody aware of the fact that such a “difference” means that, scientifically speaking, there is no difference? And so “likely topped” is likely garbage?

    Talk about a “climate crock”!

    • dana1981 Says:

      It’s the deniers who make the ‘1998 was the hottest year’ argument. Those who live in glass houses shouldn’t throw stones.

      • omnologos Says:

        Interesting, dana… so if you find anybody making an argument you consider baseless, that justifies turning it around to build an equally baseless argument?

        and before you ask…I know what SkS says on this topic and we know it’s better than what RC wrote.

    • suyts Says:

      Omn, I can do you one better… here’s a climate crock: they say ENSO is the reason why we’re seeing a flattening/cooling. OK, then wouldn’t it also be the cause of the warming seen from 1970–1998?
      Here is ENSO 3.4 quantified over that period.

      Shouldn’t these people actually check this stuff before writing about it?

      And, yes, 100ths of degrees….. something to wet ourselves about.

      • g2-b31f1590b0e74a6d1af4639162aa7f3f Says:


        Note how those who are eager to trash the global temperature record as unreliable are also eager to tell everyone how well it lines up with ENSO.

        • suyts Says:

          Read the post. I’m not positing that ENSO runs the temp records, Nielsen-Gammon is. I’m stating that it can’t be both ways. If ENSO is the cause for the lack of warming then it must also be the cause of the warming.

          Personally, I believe ENSO, in part, is a function of global energy, and it serves as a gathering and releasing mechanism. But, hey, who am I to argue with Nielsen-Gammon? If he insists that’s what’s controlling our temps then there we go….. a nice reasonable explanation. Now, don’t forget, Foster and Rahmsdorf make a similar posit as well. Don’t tell me you’re a science denier!?! You’re not are you?
