“Socioeconomic Patterns in Climate Data” by Ross McKitrick and me (MN2010) has just been accepted for publication by the Journal of Economic and Social Measurement. It can be accessed here. This paper is largely a response to Gavin Schmidt’s 2009 paper “Spurious Correlations…” (S09), which I have discussed earlier. S09 was published in the International Journal of Climatology (IJOC), which subsequently rejected an earlier version of MN2010. I was very happy to contribute a bit of the work on this paper. In particular, I did some analysis, some modeling, and helped a bit with the editing.
There is, as often seems to be the case in climate science, some heated discussion surrounding two distinct aspects of our paper. First, there is the question of whether we received a fair hearing in peer review at IJOC. Second, once again Gavin is saying that our conclusions are incorrect. I should add that he has done this without the benefit of reading our actual paper, but it seems fairly clear that reading the paper will not change his mind.
For me there are two distinct fairness and good-practice issues. First, S09 was clearly a response to Ross’s earlier work. I’m sure this is too much to ask, but Gavin should have sent his paper to Ross for comments before publishing. It would have been the right thing to do scientifically, but I’m not sure how much this is about science. Failing that, IJOC certainly should have asked the authors of the previous papers whether they had comments. At the absolute minimum it should have offered space for responses in its pages. It did none of these things, and it doesn’t appear to me that the reviewers of either S09 or MN2010 even read the predecessor papers. Second, the objections to MN2010 from IJOC had nothing to do with whether we were right; they had to do with whether the reviewers felt the predecessor papers were the right approach at all. The problem is that these were different, and less specific, arguments than those in S09. The weird thing is that these comments weren’t themselves subject to peer review or response, so from IJOC’s perspective Gavin’s incorrect arguments were allowed to stand because the reviewers had altogether different objections to Ross’s earlier work. In my opinion IJOC should have asked us to submit a response rather than a full paper in order to resolve the situation, but it didn’t.
In response to our paper Gavin is now making new technical arguments about why we are incorrect. The first argument is that he has drawn a graph that shows spatial autocorrelation (SAC) of the residuals. It is at least nice of him to acknowledge that the argument in S09 was incorrect, and that you need to look at the residuals. The problem is that he is still not doing any standard test for SAC. These tests are well known, and we performed them in our paper. This part is really amazing. I’m not an expert in this area, but back when I was looking at this I was able to quickly find a text on the subject and locate these standard tests. Who would make a statistical argument without using the standard statistical tests in the literature? We have also shown the effect of allowing for SAC where necessary, and the results stand. So in my opinion that is what he needs to respond to. His second argument is that it is possible to see these types of correlations in a single instance of a GCM run. This will take a little more examining.
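For readers who haven’t run across them, the standard SAC tests are things like Moran’s I applied to the regression residuals. Here is a minimal sketch in Python of the general idea; the one-dimensional weight matrix and the permutation test below are generic textbook illustrations, not the specific weight structure or test used in the paper:

```python
import numpy as np

def morans_i(z, W):
    # Moran's I statistic for values z under spatial weight matrix W
    # (W is n x n, nonnegative, zero on the diagonal).
    z = z - z.mean()
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

def morans_i_pvalue(z, W, n_perm=999, seed=0):
    # Permutation test: reshuffle the values across locations and ask how
    # often chance alone produces an I as extreme as the observed one.
    rng = np.random.default_rng(seed)
    obs = morans_i(z, W)
    perms = np.array([morans_i(rng.permutation(z), W) for _ in range(n_perm)])
    p = (np.sum(np.abs(perms) >= abs(obs)) + 1) / (n_perm + 1)  # two-sided pseudo p-value
    return obs, p

# Toy example: 50 points on a line, neighbours joined by weight 1.
n = 50
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.sin(np.linspace(0, 3, n))   # spatially smooth "residuals"
print(morans_i_pvalue(smooth, W))        # large I, small p: SAC is detected
```

The point is just that a quantified test with a p-value, rather than an eyeballed graph, is what the standard literature calls for.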
In S09 Gavin showed several GCM runs. Using those, he showed that some economic variables were significant in the same regression. Since socioeconomic variables obviously can’t be influencing a GCM, he argued, these types of correlations must be spurious. There are two problems. First, where the coefficients were significant they were very small, and opposite in sign to those found with the real-world climate data. Second, and rather ironically, if you allow for SAC they lose all significance, unlike those from the real-world climate data. In other words he managed to incorrectly argue that Ross’s earlier results were wrong because of SAC, and then made a flawed argument of his own because he didn’t allow for SAC.
Now he is making a different argument, which is that if you do a whole bunch of GCM runs you will see a result exactly like Ross’s earlier work. The problem is that none of the runs in S09 look like that, and he isn’t producing any others. If he does, then I guess we could take a look. Even if it does happen sometimes, and I suppose it could as a matter of random outcomes, it would need to happen a lot for our conclusions to be incorrect. That is the whole idea of significance testing.
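To make the significance-testing point concrete, here is a toy Monte Carlo in Python. Nothing in it comes from the paper: the covariate, the grid size, and the "observed" t-statistic are all made up, and the null runs are plain white noise rather than GCM output. The point is only the logic: you count how often chance alone reproduces a result as strong as the one observed, and a single lucky run proves nothing either way.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_runs = 200, 1000   # made-up numbers of grid cells and synthetic runs

x = rng.normal(size=n_grid)  # stand-in "socioeconomic" covariate (purely illustrative)
t_obs = 4.0                  # pretend t-statistic from the real-world regression

def t_stat(y, x):
    # t-statistic for the slope in a simple OLS regression of y on x.
    x_c, y_c = x - x.mean(), y - y.mean()
    beta = (x_c @ y_c) / (x_c @ x_c)
    resid = y_c - beta * x_c
    se = np.sqrt((resid @ resid) / ((len(y) - 2) * (x_c @ x_c)))
    return beta / se

# How often does pure noise produce a "correlation" as strong as the observed one?
hits = sum(abs(t_stat(rng.normal(size=n_grid), x)) >= t_obs for _ in range(n_runs))
print(hits / n_runs)   # estimated probability of seeing this by chance
```

If that fraction is tiny, then pointing to the bare possibility of a chance match does not rescue the argument; it would have to happen often.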
These results indicate that the urban heat island (UHI) effect and other measurement issues may be affecting the published long-term land temperature trends. I believe this result is plausible given what is known about UHI and the lack of metadata for large portions of the world. The results also indicate that it is in fact the areas where we have the least metadata and the poorest records that are the most affected. Also remember that land makes up only about 30% of the Earth’s surface, so even a 50% error in land trends would change the overall trend by only about 15% (0.5 × 0.3 ≈ 0.15). Therefore this shouldn’t be an argument over the big picture. But people building models need accurate measurements of the various components of the temperature trend, so they should be quite interested if corrections need to be made. The results of any one study aren’t definitive, of course, but they should be taken seriously, and additional work should be encouraged rather than huge amounts of energy and time being spent on spurious arguments trying to make the result go away.