Saturday, December 19, 2009

Global Climate Deal

Well this hardly needs me to say it, but what a farce. Did it really take thousands of people showing up for two weeks in Copenhagen to decide that we ought to limit future temperature increases to 2C? This could have been done by email. While they were at it why didn't they just agree that all future hurricanes, tornadoes, floods and droughts would be canceled? To make this happen they are adding an appendix of what everyone is agreeing to, which will not be binding and in any event wouldn't get close to limiting emissions enough to meet the temperature goal.

Of course a six-year-old could have predicted this outcome. Since all these world leaders aren't that stupid, they must be incredibly cynical to let this circus continue. Whoever thinks these negotiations are going anywhere doesn't seem to grasp that there is no "we." It is not in the interest of any one country or bloc of countries to do this, so it simply won't get done. They can meet until the cows come home and it still won't get done.

And as an aside, how in the world did these payments to developing countries become a main topic? I'm sure it is a very nice thing to do, and all those leaders like Mugabe are looking forward to their new cars and airplanes, but in what way does that lower planetary emissions? It sure looks like guilt money to me. And it has the advantage of being something that the leaders of the developed countries could actually do, probably by just redirecting other aid.


The only way we could possibly limit emissions to the levels needed would be by shutting down trade with the developing nations until they agreed to stringent limits on emissions produced by their exports. Trying to do this would likely destabilize the planet politically, which could lead to war. Read up on the origins of World War II, both in Europe and Asia. Now that everyone has nukes this cure might be worse than the disease, but who knows. In any event I don't think anyone has the stomach for it. Especially since the developed countries probably don't have the will to put up with the economic hardships that would be required to switch off of fossil fuels fast enough anyway.

So the only possible answer is a massive effort in new low carbon energy technology along with figuring out some way to suck all this extra carbon out of the atmosphere. The sooner we realize that "joint self control" is a failed strategy, the sooner we can get on with doing something useful.

Monday, December 7, 2009

My Part in A Global Climate Deal

After reading about the bold proposals from China and India in the weeks leading up to the Copenhagen summit I realized that my family needed to act in concert to reduce the threat to our planet. Just like these great nations, we need to do more than just talk about the issue; we need to take a stand and make some firm commitments. As a result I have decided that by 2020 the Nierenberg family will reduce its carbon intensity by 50% compared to 1990 levels. Now I realize that this is an even larger commitment than China and India have made, and I also realize that they have picked a much later base year, but this issue is so important that I am willing to go the extra mile.

Carbon intensity is measured as the amount of carbon emitted divided by the amount of output produced. In the case of China and India the output is defined as GDP. In my case the output is defined as income. I report that income each year to the IRS. You might be concerned that I would deliberately overstate my income, and use that to meet my targeted intensity goal. After all, the higher my income, the more carbon I can output under this proposal. But the IRS requires that I pay a substantial tax on my income, so I have little incentive to do that. In the case of China and India there is no such penalty, but as they are national governments I'm sure that their reporting of GDP will be completely accurate.

On the output side all three of us will have to be on the honor system. There is really no way to tell exactly how much carbon any of us are producing. But we will all make a good faith effort to record our activities so that we can make an accurate calculation.

Now you may object that the US as a whole needs to reduce its total output of carbon, and that while my proposal reflects a significant effort on my part it won't do enough to help the entire country get to the kind of reductions that are necessary. I think in saying this you aren't taking into account the fact that many families in the US output more carbon than we do, and have been over a long period of time. After all my grandparents arrived in this country only at the beginning of the twentieth century, and since they had relatively small families the total number of their descendants is minuscule relative to families that arrived earlier and had much more time to procreate. My grandparents were also very poor for much of the early century so their relative output was small. Just as an example look at families like the Kennedys, and the Rockefellers. It is clear to me that those families bear the responsibility of cutting their carbon output long before relatively small and new families like ours should have to.

It is true that it is very likely that, just for reasons of general efficiency and economy, and because of the overall growth in my income, I was likely to meet or even exceed these goals even before Copenhagen. But why should I be penalized for all the great work that I have been doing?

Of course I have some concerns about what will happen if we can't meet this ambitious goal. After all look what happened to the European countries that failed to meet their Kyoto commitments. But I am willing to take the risk knowing full well that India and China face the same risks that I am taking.

So I hope this adds to the political consensus that is building, and I am very happy to have done my part.

Thursday, December 3, 2009

The Gift that Keeps on Giving

There was a flurry of news articles recently covering a survey study by the Scientific Committee on Antarctic Research (SCAR). The BBC was typical with the headline "Major Sea Level Rise, as Antarctic Melts." The subhead of the article uses a 1.4M high end for the rise in sea level.

The article starts with the IPCC projections of 28-43 cm, which isn't the full range in any event. It says that the IPCC report concluded that this was "almost certainly too low," which is wrong, and then goes on with quotes from the director of the survey.

Colin Summerhayes, I guess sensing that the rest of the report's conclusions are dull, discusses the Antarctic contribution to sea level rise. He says it could add "tens of centimeters." So you might wonder how you get from tens of centimeters to a meter of additional rise. Well, you have to add in melt from other areas. What is his basis for doing that? Well, he doesn't have one. Maybe his feeling.

So, of course, rather than trying to figure this out from a news story I went to the actual report. And what you learn when you do that is that it isn't even an attempt to make a new sea level estimate. The 1.4M figure is taken from my old friend Rahmstorf 2007. I wonder if they get a free shopping day for being the one millionth citation of this weakly argued paper.

The BBC says that many other studies support these conclusions but fails to mention any. And I might add that the only addendum to the SCAR paper (http://www.scar.org/publications/occasionals/ACCE_Addendum_01Dec09.pdf) makes it clear that the conclusion is entirely based on the Rahmstorf paper. It would seem to be pretty embarrassing that one of the main conclusions you talk about from your study group is based on a single paper from three years earlier.

Some might fall back on the report's conclusion that the West Antarctic Ice Sheet would contribute tens of centimeters, and say that this is what led them to the Rahmstorf study, since perhaps it was so different from the IPCC report. But this seems to be based on the myth that the IPCC report didn't account for glacial melt in Antarctica. Section 10.6.3 fully discusses this contribution. There is no attempt in this study to reconcile their conclusions with the conclusions of the IPCC chapter.

Reading the SCAR release, you can't blame this one on the press. They make the 1.4M forecast one of their main bullet points, even though they are just referencing a single three year old paper. Once again I am tired of having to do forensic analysis on just about everything published by the scientific community to figure out what they really studied and what they are really concluding.

Friday, November 27, 2009

The Copenhagen Diagnosis is Disappointing

Note: I am very pleased to have received comments on this post, and present my responses to those comments at the end of this blog post.

In the lead up to the Copenhagen talks a group of climate scientists has put out a publication called The Copenhagen Diagnosis. For me it is a disappointing document, because the scientists aren't doing what they do best, which is presenting science. Instead they take a chapter out of the bad skeptic book, and present evidence in a one-sided and misleading way. My feeling is that they are learning the wrong lesson from everything that has happened.

The fundamental problem is that by doing things this way they are completely opening themselves up to criticism that will, to the average person, seem like the science is wrong, when it is their presentation of the science that is wrong. This causes the general public to question the whole foundation of the science on the climate issue. For people like me it causes me to read every publication, and statement from this group with complete skepticism when I would prefer to not have to dig into everything to see whether it makes sense.

I'm not going to go through the entire document. Instead I will focus on two sections where I have spent some time understanding the issues: Sea Level and Sea Ice.

Sea Level

On page 39 of the report they make two summary statements about sea level, which I will paraphrase. First, sea level is rising faster than the best estimate of the IPCC Third Assessment Report (TAR). Second, future sea level rise is likely to be twice as large as projected in the IPCC Fourth Assessment Report (AR4), based on a new understanding of ice sheets.

Why did they choose to compare sea level rise with the Third Assessment Report? Why not the first or second, or better yet the one that was just completed, AR4? Is it because most people who are reading quickly would assume that they were comparing the last few years to the most recent assessment report instead of an old one? After all, they go on to say that the most recent report seriously underestimates future sea level rise.

Even if we accept that somehow the TAR is the right baseline instead of AR4, there are problems with the graph they present. On the graph they show a gray range which they label as "IPCC Projections." They don't show anything that is a "best estimate." Looking at figure 24 from the Third Assessment Report it is a little difficult to tell what they have used to create this range, but that report did present a gray range of its own. If you use a magnifying glass and do some interpolation it might have the same high and low values as the range in their graph. But in the IPCC report that range is labeled as "the range of the average of AOGCMs for all thirty five SRES scenarios." On the Third Assessment graph there is a much wider range that is labeled "the range of all AOGCMs and scenarios including uncertainty in land-ice changes, permafrost changes and sediment deposition."

Neither of these are labeled as a "best estimate." And in fact even on their own graph the observations are at the top end of the range, not outside the range.

To make matters worse, if you look at the graph you can see that observations are well above the center of the gray region by 1995. But the Third Assessment Report was published in 2001. Did the authors of the TAR really intend that sea level already be at the top of their range years before the report was published? Or is this more likely just mixing apples and oranges by comparing General Circulation Models (GCMs) with observations? I don't think the GCMs were ever intended to be compared with observations in this manner. And the fact that the graph in the TAR doesn't include observations would back up this conclusion.

Looking to the future, what is the basis for their statement that sea level is likely to rise twice as fast as predicted in AR4? AR4 was published in 2007, so something dramatic must have happened for the conclusions of an entire chapter to be overturned. We would expect to find some serious and widely accepted peer reviewed studies showing that the AR4 consensus chapter was seriously flawed. Instead they base their conclusion on three things: first, that the AR4 didn't include dynamic processes; second, that a single paper by Stefan Rahmstorf predicts higher values; and third, the results of the "Delta Committee." These are really poor arguments.

On the issue of dynamic processes, it is true that the AR4 left dynamic processes out of their core range. But they go on to discuss what effect they would have. Figure 10.7 includes an additional .1 to .2M for "scaled-up ice sheet discharge." So in the judgment of the chapter authors, if in fact there is scaled-up discharge from Greenland and Antarctica, they would add .1 to .2M to their estimates. This is a long way from doubling their estimates. The Copenhagen authors present no evidence of any kind, peer reviewed or otherwise, that this is incorrect.

Their second leg is the Rahmstorf 2007 paper that uses a simple linear model to forecast future sea level rises based on temperature. There are a lot of problems with that model as I have discussed in other blog posts, and as were discussed in published comments by two different groups following the publication of the paper. Even if you think that paper is interesting, it hardly can be used to overturn the consensus work of the AR4 authors who are actually experts in the field of sea level rise. (By the way they also list WBGU 2006 as a reference, but as this refers to published work, and previous IPCC reports it can hardly be used to claim that new research has overturned the AR4 prediction.)

Finally they reference the "Delta Committee." This was a government committee in the Netherlands which gave recommendations to the Dutch government on how much sea level rise to plan for. This link fills in some details about the committee, which was not a scientific assessment, and once again certainly isn't a reasonable case for saying that the estimates of the AR4 were wrong.

In summary their presentation of the past is misleading, and their statement about likely future sea level rise is not grounded in the peer reviewed literature. If they wanted to say it was their opinion that would be fine. But the way it is written a casual reader wouldn't get that message.

Sea Ice

In their summary on Sea Ice on page 31 they state that the melt in the Arctic has been larger than forecast by the AR4. This statement is true, but there are three problems. First, there have been very few data points; one could just as easily say that global temperature has been well below the forecasts of the AR4, and it has been pointed out repeatedly that this would not be proof of anything. Second, they fail to mention in the summary that Antarctic sea ice has been above those same forecasts, although they cover the topic later in the chapter. Finally, they fail to mention a recent publication by Shindell explaining that warming in the Arctic has been enhanced by black soot, which likely has been a factor in the amount of Arctic sea ice melt.

To complete the imbalance they show a graph of Arctic sea ice, but no graph of Antarctic sea ice. The Arctic sea ice graph is again a little strange. They label the "prediction" in the graph as "mean and range of IPCC models." We can see that the observations have been diverging downward, and have left the range in recent years. (Note that they chose not to plot the one year recovery in 2009 even though they refer to it in the figure description.) But looking closely, the divergence appears to begin in 1975. When were these models created? Once again, am I to believe that a model created circa 2006 was already going wrong in 1975? In AR4 they don't include observations in figure 10.13, leading me to believe that you can't simply compare the observations with those models without great care in centering, etc.

On the subject of the Antarctic they do devote a page to discussing the increase in sea ice. But the page is pretty much designed to explain it away, citing circulation, ozone, and the fact that the ice is shrinking in some areas. I'm sure those are all good reasons, just like the black soot is a factor accelerating Arctic ice melt, which they don't mention. But to be balanced they should have presented a graph comparing Antarctic sea ice with the IPCC AR4 projections from figure 10.13. This would show a very large divergence, but on the high side.

Summary

The AR4 is a well written balanced document. I might have some objections here and there, but I learn a lot from reading it, and it gives me a good picture of where the consensus lies on climate change. When scientists stray from this type of generally balanced document, as they have with the Copenhagen Diagnosis, it makes me have to question everything that they are writing and saying, which defeats the purpose.

Response to Comments

Catherine,

I will start with the choice of report. Your premise is that they chose the TAR because it has a projection that covers the current date, while AR4 doesn't have such a projection. That is a good point, but I don't think it is quite as meaningful as you state.

The problem is that while the graph in the TAR is labeled "Sea Level Rise," it is actually a graph of absolute sea level. The graph in the CD is also absolute sea level, not rate of rise. To determine the rate of sea level rise you would need to determine the slope of each of the lines in the graph. This is where you could possibly find the 1.9mm/year figure that they use, although it is not sourced. If they had wanted to source a figure for the rate of rise, which of course wouldn't start at zero, and then compare rates of rise during the period, that would have been clear. My comments about being above the range long before 2001 have to do with absolute sea level as pictured in their graph, and since I don't think anyone was expecting sea level to drop significantly during that period it seems unlikely that it was meant to be compared to measurements.

(After I wrote this I reflected that, for me at least, that graph would have been more interesting. But I think for most people two sets of nearly horizontal sea level figures, with the TAR figure moving up towards the measured rate of rise, wouldn't have made their rhetorical point. Especially since the rate has been decreasing in recent years; see below.)

Now let's take a look at the actual TAR graph. Between 1990 and 2010 the center of the upper and lower range, and the gray range for that matter, goes from zero to approximately .03m, or 3cm. This produces an average slope over that period of 1.5mm/year, although it strains my eyes to read it; the arithmetic is sketched below. The period that they quote is 1993 to 2008, so since the curve slopes upward this might produce the 1.9mm/year figure.
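
Just to make the arithmetic explicit, here it is in R, with the values read off the TAR graph (so they are only approximate):

    rise_m <- 0.03            # roughly 3cm rise at the center of the range, 1990-2010
    years  <- 2010 - 1990     # 20 years
    rise_m / years * 1000     # average slope in mm/year: 1.5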

They emphasize that the measured rate of 3.4mm/year is 80% higher than the 1.9 figure. I have to admit that I got caught up with their graph and the ranges presented in it, so I missed that point. However this recent rate of sea level rise was well understood by the authors of the chapter in AR4, and was completely taken into account in their revised sea level estimates. And even so the AR4 range is substantially similar to the TAR, as they helpfully point out, but for a different reason. So while the points they make about sea level are each technically correct, and I never said they weren't, they tend to mislead the reader into believing that the recent measured rate of sea level rise indicates that the consensus range in AR4 is likely to be understated, and it means no such thing.

You go on to say that they could have made it more dramatic by using the AR4 models, but this seems to contradict your statement that they don't cover the relevant period, so I'm not sure what you mean by that.

Your comments about the best estimate being the center are certainly consistent with other cases. But as I have pointed out it doesn't really matter which center you use, at least in this short time frame. If you compute the slope using the top bar, which eventually leads to a .8M sea level rise, the recent rise in sea level is within the range, and because of the style of presentation this is correctly pictured in their graph.

Catherine, please add any other thoughts you would like. You caused me to look at the data a little differently, although as the scientists always like to say, my main conclusions are not affected. :-)



Anonymous,

Your post starts off well enough but then devolves into the typical sort of insults that seem to be hurled around this issue constantly. Nevertheless I will answer your comment.

There have been several papers recently discussing mass loss from both Antarctica and Greenland. Particularly of note is that the GRACE satellite is showing that Antarctica may be contributing to sea level rise, while the AR4 models centered around the idea that it would be absorbing mass and therefore slowing sea level rise.

Nevertheless, as I pointed out in my original post, these types of ice sheet dynamics were discussed in AR4. The chapter authors provided the range that they would add for this type of thing as an additional .1 to .2 meters. I know of no published research changing the core conclusions on sea level rise as a result of these measurements, which in fact at current levels would be trivial. Trust me, if those reports existed the authors of the CD would have referenced them.

In addition, based on recent research (1), sea level rise is currently decelerating over the short term. So even if you believe that there is an increased contribution from the ice sheets, it is being offset by other factors. Remember that 3.4 mm/year would only yield .34m of sea level rise over the century, so there has to be an acceleration just to reach the midpoint of the estimates; a rough calculation of the required acceleration follows.
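
As a rough illustration, assume the rise follows total = v0*t + a*t^2/2, and take a midpoint of about .4m by 2100 (the .4m is my own round number for this sketch):

    v0     <- 3.4                     # current rate, mm/year
    yrs    <- 100                     # roughly a century
    target <- 400                     # .4m expressed in mm
    2 * (target - v0 * yrs) / yrs^2   # required constant acceleration: ~0.012 mm/year^2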



(1) M. Ablain, A. Cazenave, G. Valladeau, and S. Guinehut, "A new assessment of the error budget of global mean sea level rate estimated by satellite altimetry over 1993–2008."

Wednesday, November 25, 2009

Follow up on Rahmstorf 2007

I had thought in my last post that it was quite evident that the model in Rahmstorf 2007 was not well specified. In fact I was more interested in the issues dealing with the response to comments than I was in making the point clearly, I suppose.

In that post I showed that the first half of the data did not do a good job of predicting the second half of the data. In fact the coefficient for the second period has half the value of the first period, which would produce wildly different results for future predictions. But this by itself doesn't show the obvious, which is that the linear model just doesn't work, even without looking at out of sample prediction.

Here is a web page that presents a methodology for checking whether a linear regression is well specified.

http://www.duke.edu/~rnau/testing.htm

Remember that I am doing this against the final calculations, including the corrections from the corrigenda published by Rahmstorf.

First I plotted predicted values against the actual values, and in fact they are not symmetrically distributed around either the diagonal or horizontal line. You can try it yourself from the code I already posted. But since there is no fixed rule for what symmetric means, my experience is that this will not be sufficient to make my point.
So then I computed the Durbin-Watson statistic for autocorrelation in the residuals.

http://en.wikipedia.org/wiki/Durbin%E2%80%93Watson_statistic

The result is .4, which according to the Wikipedia page puts it in the range where it "might be cause for alarm."
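
The statistic itself is easy to compute by hand: it is just the ratio of squared successive residual differences to squared residuals. A sketch, assuming fit is the lm() model from the code I posted earlier:

    # Durbin-Watson statistic: near 2 means no autocorrelation,
    # near 0 means strong positive autocorrelation.
    dw <- function(fit) {
      e <- residuals(fit)
      sum(diff(e)^2) / sum(e^2)
    }
    dw(fit)   # ~0.4 here
    # or, equivalently, with the lmtest package: lmtest::dwtest(fit)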

The point is that the residuals are not well scattered; they are highly autocorrelated. Even a first year statistics student would see that the model isn't well specified.


In response to a comment here is the plot of the actual versus predicted values.

Here is a plot of the residuals. It doesn't take a DW statistic to see how highly autocorrelated they are.
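
For anyone who wants to reproduce these figures, something like the following works, again assuming fit is the lm() model from the posted code:

    actual <- fitted(fit) + residuals(fit)
    plot(fitted(fit), actual, xlab = "Predicted rate", ylab = "Actual rate")
    abline(0, 1)    # points should straddle this diagonal
    plot(residuals(fit), type = "b", xlab = "Bin", ylab = "Residual")
    abline(h = 0)   # long runs on one side of this line show the autocorrelation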

Monday, April 13, 2009

Published Comments on Rahmstorf 2007

After Rahmstorf 2007 (R07) was published in Science, two comments were published, each critical of the results. One comment was by Torben Schmith, Søren Johansen, and Peter Thejll (Schmith et al.). The other was by Simon Holgate, Svetlana Jevrejeva, Philip Woodworth, and Simon Brewer (Holgate et al.). Dr. Rahmstorf wrote a reply (RR07) to both of these comments. A year later he followed up with a technical correction to the reply. I will touch briefly on Schmith et al., but focus on Holgate et al. In this post I will show that the response to Holgate et al. was not nearly as robust as presented. My conclusion is the same as Holgate et al.: "...we do not agree that simplistic projections of the nature presented in [R07] substantially contribute to our understanding of the uncertainties in the nonlinear relationships of the climate system."

Schmith et al. essentially said that because there is a trend in both the temperature and sea level series, the analysis violated the "basic assumptions of the statistical methods used." Remarkably they didn't comment on the smoothing. Even more importantly they ignored the fact that the residuals show a very high level of autocorrelation, further evidence that the model is mis-specified. Even so they concluded that the likelihood of the model was overstated. In RR07 Dr. Rahmstorf attempted to show that even with the trends removed a good fit remained, but this hardly addresses the main points, which stand. The rest of this post will show that the point is largely moot anyway.

The point that Holgate et al. made was simple: R07 didn't test to see if it could predict withheld data by modeling on the rest of the data. There are two problems with their presentation that confused the point when Dr. Rahmstorf responded. First, they never duplicated his original result, instead using an SSA algorithm of their own devising. Second, they used a different method to build the models. Using their own methods they were unable to predict the second half of the data set using the first half, and of course likewise in reverse. It is true that Dr. Rahmstorf didn't publish his code until he wrote RR07, but it seems like they should have sent him an email so they could start from the same place.

(I also note that at the beginning of his response to comments Dr. Rahmstorf said he was making "the computer code used in the analysis available for use by other researchers." What he neglects to mention is that the computer code he supplied was essentially useless without ssatrend.m. And nothing in his source code indicates where you could find that. The rest of his code is just simple regression and plotting, not much use to other researchers.)

Of course in RR07 Dr. Rahmstorf went back to his original method and reported that he could predict the second half using the first half of the data, and likewise the first half using the second half. In making his response he failed to report certain important points, and more importantly he made a significant error. A year later in October 2008 Science published his "technical correction" for that error, but as I will show that technical correction also fails.

The question is does the first half of the data predict the second half. Or, from my point of view, put more simply would a model built from the first half of the data be similar to a model built from the second half.

To evaluate this I regressed a model using the first twelve of the twenty-four five year bins. In this case the intercept is the same as the full model, but the coefficient is .44 versus .34 in the model built from the full data set. (RR07 reports .42; I'm not sure why the difference, but it doesn't really matter. Also it incorrectly reports this as .42mm/year/degree, but it is actually .42 cm/year/degree.) Using the second twelve of the five year bins the coefficient is only .24. (RR07 doesn't report this.) So simply put, a model based on the first part of the period would have predicted nearly twice the sensitivity to temperature change that was experienced in the second period, as the sketch below shows. RR07 shows graphs that it claims show it is making useful predictions, but based on this analysis it seems like a pretty poor match to me.
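
The split-sample comparison is a few lines of R. This is a sketch, assuming a data frame bins with one row per five year bin and columns rate (cm/year) and temp (degrees C), as built by the posted code:

    first  <- lm(rate ~ temp, data = bins[1:12, ])    # first twelve bins
    second <- lm(rate ~ temp, data = bins[13:24, ])   # second twelve bins
    coef(first)["temp"]    # ~.44 cm/year/degree
    coef(second)["temp"]   # ~.24 cm/year/degree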

I've thrown some R code into the bottom of the source file so that the reader can project 2100 sea level with the different coefficients. But suffice it to say it makes a huge difference whether the coefficient is .42 or .24.

In any event I wonder whether the reader of these two posts has noticed the error in this methodology. The problem is that prior to being put into the bins the data had already been smoothed by the ssatrend algorithm with a 15 year window. This means that some of the original data from the second period is actually influencing the smoothed data in the first period. I don't know how Dr. Rahmstorf discovered this, but as I said, in October 2008 he published a technical correction. In the technical correction he noted the problem:

"This is correct, but it was illustrated by an incorrect figure (Fig. 1),in which the first half of the smoothed sea-level curve (1882 to 1941) was used to predict the sea level for 1942 to 2001. Because the smoothing procedure used a 15-year time window, the smoothed sea-level curve up to 1941 effectively contains sea-level information up to 1948. When this error is corrected and only annual sea-level measurements from 1882 to 1941 are used, the obtained fitgives a sea-level slope of 0.35 mm/year per °C"

(.35 mm/year/°C should, of course, be .35cm/year/°C)

This, he helpfully pointed out, was almost equal to the full trend of the original model, even more strongly demonstrating how well a model from the first period predicted the rest of the data.

He failed to note three things. First, in the response he said that he trained the data using the period 1880-1940; in the technical correction this was deftly changed to 1882-1941. It turns out the result is highly sensitive to this choice. Second, based on my work I have determined that he changed the window size to 10 years from 15. Third, using this exact technique the coefficient in the second period is only .22; this is unreported in the technical correction and is still much different from the trend in the first period.

In the data sets supplied by RR07 the sea level data begins in 1870, but the temperature data begins in 1880. As I've said before it isn't clear why he chose to bin the data at all, since it doesn't make any difference to the result, but the way that he did the binning in R07 starts the analysis in 1882. You can only find this out from the code, as R07 says it is using the period 1880-2001. I note that it doesn't change the results of the initial analysis.

In RR07 he says; "...but using only the first half of the data set (1880 to 1940) for deriving the statistical fit." RR07 supplied the code for R07 but not the code for the analysis in the response. This doesn't really matter because as I have duplicated he got the results in RR07 by using the approach of just regressing the first twelve bins. (This effectively started in 1882)

I have duplicated the .35 figure reported in the technical correction. To do this you have to use exactly 1882-1941. You also have to reduce the window to 10 years from 15, which is unreported in the technical correction. The result is not robust to changes in either the exact year range or the window. If you move the window one year back this lowers the first period coefficient to .26. If you move it one year forward it raises the coefficient to .41. Combining the fact that the first and second periods don't match with the fact that trivial changes in date ranges and window selections make large changes in the results shows that this model is not well specified. It is certainly not useful for updating sea level predictions beyond the results of AR4. A sketch of the sensitivity check follows.
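
In outline the sensitivity check looks like this. It is only a sketch: ssatrend here stands for my R translation of the Matlab function (not a standard package), sl and temp for the annual sea level and temperature vectors, and yr for their years.

    for (start in c(1881, 1882, 1883)) {
      use  <- yr >= start & yr <= start + 59   # a 60-year training period
      rate <- diff(ssatrend(sl[use], 10))      # smoothed rate of rise, 10-year window
      tsm  <- ssatrend(temp[use], 10)[-1]      # aligned smoothed temperatures
      print(coef(lm(rate ~ tsm))[2])           # roughly .26, .35, .41
    }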

As a closing note on this analysis I want to say that it may not have been intentional, but R07, RR07, and the technical note were not nearly transparent enough. Instead it seems that they were written to make a point. It also points out, once again, the need for complete code disclosure.

Code for this analysis can be found here.

Saturday, April 11, 2009

Duplicating Rahmstorf 2007

I had quite an interesting time replicating the calculations from "A Semi-Empirical Approach to Projecting Future Sea-Level Rise" (Science, 1/19/2007; R07) by Stefan Rahmstorf. The general concept in the paper is very simple: he estimates a linear equation for the rate of sea level rise based on measured global temperature data. The difficulty came in because the paper relies on an algorithm that is not generally available. As evidence of this, although two critical comments on his paper were published in Science, neither replicated the original calculations of R07. It only became possible to replicate the results when Dr. Rahmstorf published his code as part of his response to those comments. Even then you had to do a little hunting. In this post I will show that I have duplicated those results, so that my subsequent comments make sense.

R07 is quite simple in concept. His theory is that the rate of sea level rise has a linear correlation with surface temperature. The idea is that you start with a situation where sea level is stable. Then you raise the temperature, causing sea level to rise until it reaches a new equilibrium. I won't explain the whole paper, but this makes perfect sense. There are, of course, multiple inputs to sea level rise, but pretty much all of them should respond to a change in surface temperature; the question is how fast and how much.
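
The core relation is just dH/dt = a(T - T0). As a minimal sketch, ignoring the smoothing for the moment (sealevel and temperature stand for the annual Church and White and GISS series):

    rate <- diff(sealevel)               # annual rate of rise from the level series
    fit  <- lm(rate ~ temperature[-1])   # slope is a; intercept is -a*T0
    coef(fit)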

To estimate this linear equation he uses sea level data from Church and White (IPCC), as well as the GISS global temperature record. But this is where things get a little tricky. He doesn't do a simple regression on the temperature and sea level because this data is "noisy." Instead he runs both the temperature and the sea level data through a process to separate the trend from the noise. This isn't explained in the main text of the paper, but it is referred to in the text below figures 2 and 3: "Both temperature and sea-level curves were smoothed by computing nonlinear trend lines, with an embedding period of 15 years (14)." He uses the word "smoothed" but this is not a typical smoothing algorithm. Use of a typical polynomial fit will not get as "good" a result, as I discovered in early attempts.

I might have discovered this more quickly if I had read all the way to the bottom of Dr. Rahmstorf's reply to comments where a link to the code and data was published. The code and data weren't linked from the original paper which is where I started. Both published comments also didn't have the benefit of this code and data which is clear from reading them, and I guess after that everyone called it a day. But I will get to this stuff in a later post.

Reference 14 from the paper is "J. C. Moore, A. Grinsted, S. Jevrejeva, Eos 86, 226 (2005)." This paper is titled "New Tools for Analyzing Time Series Relationships and Trends." It is actually a short review article of several new mathematical techniques. At first glance it certainly isn't immediately apparent that this tells us how the "smoothing" was done. But there is a section titled "Nonlinear Trends in Sea Level and Temperature." This section refers to the use of Singular Spectrum Analysis (SSA) to extract a nonlinear trend. It refers to Ghil et al. 2002. So this is at best an indirect reference.

At this point I still hadn't discovered the code at the bottom of the reply, but I did have a lead. So I looked into SSA and discovered software at UCLA that was available. To make a long story short, SSA is necessary but not sufficient to duplicate the R07 results. In fact further searches, and eventually the code, led me to the fact that R07 relied on a Matlab function called ssatrend.m. Dr. Rahmstorf indicated on Real Climate that this was available from Aslak Grinsted.

I wrote Dr. Grinsted, who wrote me back very promptly and sent me the source code to ssatrend.m. He also commented that he had no idea how Dr. Rahmstorf had gotten a copy of it, and that he had never meant for it to be distributed. I think that he was just concerned about it being unsupported. My own view is that it is pretty strange to use some random piece of code in a published paper without making the code your own. Especially where, as in this case, the strength of the result depends on using this specific algorithm. It isn't available from any of the usual Matlab repositories, which is the first place I looked.

In any event, with a little effort I found an SSA algorithm written in R on SourceForge. I checked its results against the UCLA program and they were identical, so I knew the foundation was good. Then with some effort I translated ssatrend.m into R so that I could run it on an open platform.

Finally I translated the code supplied in the reply written by Dr. Rahmstorf into R so that I could at least see if I was starting from the same place. This was successful.

Using the supplied input files for sea level and temperature I get a correlation coefficient (R) of .88 exactly as reported in R07. In addition the following graphs are identical to figures two and three (top) of R07. The code is here.

At the end of the day the use of this unpublished algorithm made this much harder than it should have been as the underlying concept is very simple. I have no idea how a reviewer could have evaluated whether the use of this algorithm made sense. Having said that I don't really see any problem with it, and I look forward to trying ssatrend with other data.

But now that I can get the exact results from the paper, I have a couple of comments that go beyond what has already been written in Science.

New consensus on sea level rise?

Recently because of some posts on stoat I have become interested in how the consensus is moving on projected sea level rise. There are clearly some scientists who now believe that 1M above 1990 by 2100 is likely. These include Stefan Rahmstorf, and Aslak Grinsted who have published papers based on empirical analysis to come to this conclusion. There are a number of issues with the paper by Dr. Rahmstorf some of which were published as comments in Science. I will post some additional comments which follow the publishing of the code and data. The paper by Dr. Grinsted is more interesting to me, but because of the time periods he uses to train and test his model much of the input data and assumptions are necessarily pretty fuzzy.

In AR4 the consensus estimates for sea level rise are on page 821, figure 10.33. Table 10.7 on the previous page breaks down the components. The range for the various scenarios, including the error bars, is roughly .2 meters to .6 meters. Many people seem to feel that they just threw up their hands at faster ice sheet discharge, but Table 10.7 includes figures for that at the bottom. They declined to add these into the projections because they couldn't assess the likelihood of this happening. It is important to note that these would have only added .09 to .17 meters to the high end of the range, bringing the top to about .8 meters. (It would have slightly lowered the bottom of the range as well.)

So I wonder how far the consensus has actually changed in the last couple of years. Or are there simply some scientists who believe the consensus is low? Should the best estimate now be considered 1M? Or should we stay with the IPCC conclusions? Of course I don't attend the conferences with the types of experts who are called on to determine the consensus view. But I note that it doesn't seem to me that any of the lead authors of Chapter 10 are authors of these recent studies. (I could easily be wrong as cross checking that type of thing is difficult.)

Anyway I am going to write a couple of posts looking at these empirical papers. I am also looking at building an empirical model of my own to see if it will improve on the results of R07.

Saturday, March 14, 2009

Troposphere Temps from Satellites and Surface Temps

In an earlier post I took a look at MM07 versus S09. One of the interesting results was that the rate of post secondary education (PSE) in a country was inversely proportional to the rate of temperature increase in that country. Taking a deeper look, I saw that for the top quartile of grid points based on PSE from the analysis, the surface warmed more slowly than the troposphere, while in the rest of the grid points the surface warmed much faster than the troposphere. (This is using HadCRUT for the surface and RSS for the troposphere.) I suppose you could shrug this off as coincidence, except that according to the Model E data supplied by Dr. Schmidt the troposphere is supposed to be generally warming faster than the surface everywhere.

Over the 440 grid cells of the analysis the Model E troposphere warmed faster than the surface (.16 degrees per decade versus .14). This contrasts with the measured data from HadCRUT and RSS, where the surface warmed faster than the troposphere (.27 versus .23).

Looking at the segment with high PSE we can start with the US. Now I know there has been a lot of sniping about the US temperature network, but I'm guessing that it is really pretty good. Out of the 440 grid points, 52 are in the US. For these grid points the troposphere is warming faster than the surface (.26 versus .24). There are 85 grid points in the top quartile outside the US. For those points the troposphere is also warming faster than the surface (.21 versus .19).
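
The comparison itself is straightforward. A sketch, assuming a data frame cells with one row per grid cell and columns pse, surf (HadCRUT trend), trop (RSS trend), and us (TRUE for US cells), built from the MM07/S09 data:

    top <- cells$pse >= quantile(cells$pse, 0.75)        # top PSE quartile
    colMeans(cells[top, c("surf", "trop")])              # surface vs. troposphere
    colMeans(cells[!top, c("surf", "trop")])             # the remaining cells
    colMeans(cells[top & cells$us, c("surf", "trop")])   # the 52 US cells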

Now the truth is that this data is very convenient for me to look at because it was already laid out by others, and I haven't looked at any other time periods to confirm that this isn't some kind of fluke.

But I think it is pretty interesting that in the countries that probably have the best surface temperature networks the actual measurements are in line with the theory as proposed by the results of Model E. The conclusion would be that perhaps climate scientists ought to be focused on troposhperic temperatures as measured by satellite, and reduce their dependence on ground based measurements. Switching to satellite measurement seems to be happening elsewhere with a good example being sea level rise.

I should add that they ought to be noticing this type of agreement with models and be pleased with the vindication, but I don't sense that they are. I have a theory as to why.

When the satellite temperatures were first introduced by UAH they were used by climate skeptics to show that there was no warming. In addition Drs. Christy and Spencer aligned themselves to some degree with the skeptics camp. Even though subsequent events have corrected the satellite trends, and there is now an independent satellite measurement from RSS, this seems to have put satellite measurements out of favor in this area. This is particularly true for the UAH data.

In fact you can get a hint of this from S09. At one point Dr. Schmidt comments that the differing regression results he got by using RSS versus UAH might be caused by the "higher trend" in RSS. In fact this is uncited and he provides no results to back it up. I think he just assumed it was true, because a quick test of the trends shows that for this set of grid cells over this period RSS and UAH have identical average trends. The point being that Dr. Schmidt believed so strongly that UAH would of course show less warming than RSS that he didn't even test the conclusion.

I think it would be quite interesting if the climate modeling community would look at their results relative to the tropospheric measurements from RSS/UAH and deemphasize the surface network. There is plenty of warming in the satellite measurements, and they may be a whole lot more accurate.

Follow Up on ERA-Interim from ECMWF

After looking at this post, Ryan Maue suggested that I do a further analysis of humidity trends using the ERA-Interim data set from ECMWF. This is the most up to date information, although it covers a much shorter period than the ERA-40 data I used in the earlier analysis. The results are that over the 19 year period from 1989 to 2007 there are no significant trends in specific humidity (q) in this data set. Current theory would say that q should be increasing. Thus between the ERA-40 data and the ERA-Interim data there is no confirmation of the theory; in fact many of the trends are negative, particularly over the NH mid latitudes, but not significantly so.

The data for this study is from the ERA interim data set downloaded from the ECMWF servers.

The results can be found here.

Wednesday, March 11, 2009

Humidity Trends from ECMWF

There was an interesting post on Climate Audit about a paper covering trends in atmospheric humidity. The substance of the post was discussing why a paper by Gareth Paltridge and others was rejected by the Journal of Climate. But I was quite interested to learn that there was long term trend data on humidity.

As most people who follow the issue understand, a doubling of CO2 by itself should increase atmospheric temperature about 1 degree C at equilibrium. The big question is what feedback is caused by this temperature increase. The largest feedback is caused by an increase in water vapor as the temperature increases. The theory is that relative humidity (r) should remain constant, which would mean that as temperature increases, specific humidity (q) would increase. This means that the total water vapor of the atmosphere would increase, causing an increased forcing.
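
The reason constant r implies rising q is that the saturation vapor pressure grows roughly 7% per degree C. You can check this with a standard approximation (Bolton's formula; T in degrees C, result in hPa):

    es <- function(T) 6.112 * exp(17.67 * T / (T + 243.5))
    es(16) / es(15) - 1   # ~0.066: about 7% more water vapor capacity per degree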

Over time I have seen a few papers which have confirmed that relative humidity is indeed staying constant. But these papers have had a lot of caveats and have covered limited areas and time frames. So I was interested in the paper by Dr. Paltridge which used a re-analysis data set called NCEP covering a 35 year period from 1973 to 2007.

I don't have access to the paper but the summary is that in this data set r decreased over the period in some relevant regions. This would be counter to theory, and interesting.

Ryan Maue, who is a student at Florida State, made several comments on CA pointing out issues with the Paltridge paper. Essentially the objection is that prior to 1979 the data is based on a small radiosonde network, and that the subsequent data is based on a combination of satellites and radiosondes. He felt that at a minimum the results should be compared to other re-analysis data sets. He chose ECMWF as the best. He even commented that he would take a look himself.

So I thought I would go ahead and see if I could figure out how to download the ECMWF data and do a quick analysis. It turns out to be quite possible.

The ECMWF data covers parts of 46 years, but only 44 complete years from 1958 to 2001. So I downloaded the r and q data for all grid locations for those years. The results have something for everyone I suppose.

For completeness I started out looking at the trends for the entire period and the entire globe. In this case q is only negative at the very highest altitudes, above 100hPa. Below 500hPa q is positive. The trend in r, on the other hand, is negative above 400hPa, and positive below 925hPa, with the altitudes in between not being significant. Again, according to theory the trend in r is supposed to be zero and the trend in q is supposed to be positive.
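
The trend test at each pressure level is simple. A sketch for one level, where q_mean stands for the vector of annual area-weighted means built from the downloaded fields:

    year <- 1958:2001
    fit  <- lm(q_mean ~ year)
    summary(fit)$coefficients["year", ]   # slope, std. error, t value, p-value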

But I get the impression that the global figure over this time period is not the most interesting. As Mr. Maue points out, most of the radiosondes are in a small band in the Northern Hemisphere. So the trends that cover that area are called out both by Dr. Paltridge and by Mr. Maue in a subsequent post.

Looking at the NH results for the entire period, q is significantly negative all the way down to 700hPa. It only becomes significantly positive at 925hPa. In a result I don't completely understand, r is negative above 400hPa and insignificant below that. I would have thought that in a warming atmosphere, if q were negative, r would have to be negative as well. In any event the negative q trend over the NH, which is where the majority of the real measurements would have been made in this time period, seems to differ from theory and to be in line with the results from NCEP. Note that I tried two definitions of the Northern mid latitudes with no change in results.

In the SH the trends for q are largely positive and the trends for r insignificant, which would be in line with theory. And this is true for the entire mid latitude and tropic region, which has negative r only for the altitudes above 400hPa.

In summary then over the 44 year period the area that shows negative q seems to be the mid latitudes of the Northern Hemisphere. Since the areas outside of the measurement regions are computed using climate models I'm not sure of the relationship of the "real" data to areas where there were no radiosondes.

I took a separate look at the "post satellite" period. Unfortunately this is a very short time in this data set since it ends in 2001 unlike the NCEP data which goes through 2007. The trends were not particularly significant over this period.

The R code for this analysis can be found here.

The data for this post is from ERA-40 and was graciously supplied by the ECMWF data server.

Tuesday, March 3, 2009

IPCC Sea Ice Forecasts

There has been a lot of discussion in recent years of the large drop in Arctic sea ice. People who are particularly concerned about CO2 induced global warming have pointed to this drop as clear current evidence that the effects of warming are accelerating. People who are generally skeptical about CO2 induced global warming have brought up the fact that Antarctic sea ice levels appear to be increasing as a counter argument. It has always seemed logical to me that global sea ice level was probably the most complete measure in this area, but I had read several informal documents that suggested that the Arctic was the true signature test for CO2 induced global warming.

In particular, earlier this year a blogger wrote a post noting that global sea ice was at the same level as 1979. This generated a response from "Cryosphere Today." The response was that while the information was correct, it was referring to the global trend and not the Arctic trend. They point out that the reduced sea ice in the Arctic was being compensated for by increased ice in the Antarctic. The response includes the following: "In the context of climate change, GLOBAL sea ice area may not be the most relevant indicator. Almost all global climate models project a decrease in the Northern Hemisphere sea ice area over the next several decades under increasing greenhouse gas scenarios. But, the same model responses of the Southern Hemisphere sea ice are less certain."

I had heard statements like this before, but I had never looked to see what the basis of the statements might be. So I went to the AR4 WG1 report. On page 770 you can find chapter 10.3.3.1 "Changes in Sea Ice Cover." In that section you can find the following. "In 20th and 21st-century simulations, antarctic sea ice cover is projected to decrease more slowly than in the Arctic (Figures 10.13c,d and 10.14)." I have copied figure 10.13 below.

The top two projections are for the Arctic, and the bottom two are for the Antarctic. The black line represents the ensemble mean for the various scenarios and models. What is obvious from looking at this picture is that while the Arctic sea ice is projected to decrease more rapidly, Antarctic sea ice is projected to decrease as well. Not only that, but the decrease should have been occurring fairly steadily over the last few decades, and should be accelerating at this point.

In that section, which it is true is quite short, I found no reference to higher uncertainty regarding Antarctic sea ice. The final paragraph of the section just refers to the general uncertainty of projections including the amount of climate change in the polar regions in general.

So what evidence did CT have for their assertion? They link to a single 2005 study discussing a simulation where increased snowfall in the Antarctic tends to preserve sea ice.

So the consensus view as established by the IPCC is that while the Arctic is expected to decline more rapidly we should see declining sea ice at both poles. This means that increased Antarctic sea ice over the last 30 years is not in line with these projections. Does this prove that increasing CO2 does not cause global warming? Not even a little. But it does mean that looking at global sea ice level is relevant, and that focusing exclusively on Arctic sea ice might reasonably be considered cherry picking.

Wednesday, February 25, 2009

S09, MM07 and Spatial Autocorrelation

S09 suggests that spatial autocorrelation caused spurious results in MM07. In addition S09 points out that using the RSS data the significance of the MM07 results decreases. S09 also demonstrates that spurious correlations can be shown by regressing economic statistics against model driven temperature data. My conclusion is that using standard techniques of spatial analysis these findings as they relate to MM07 appear to be incorrect. There may in fact be other issues with MM07 but I am unconvinced that the arguments in S09 disprove MM07.

As a caveat, I have only recently taken a look at spatial analysis. In addition I am hardly an expert in statistics. But the methods for doing this seem straightforward, and they are well implemented in the R language, using the spdep package. There are a number of options to choose on the various functions. I have played around with these without the results changing. I have not obtained any unreported results counter to my conclusions, but readers are free to try other options of course. Once again I have published all my code, and the locations of the data. I am wide open to suggestions and criticism.

It is well understood that the test for spatial autocorrelation in a regression is whether the residuals are spatially correlated. The most common test for this seems to be Moran's I test. Moran's I yields a value between -1 and 1, which indicates the amount and sign of correlation. It also yields a p-value, which indicates whether the correlation is significant.

Running Moran's I on the main UAH regression, the statistic is .01 with a p-value of .24. This shows an insignificant and very small positive spatial correlation. I actually ran the test using three distance weighting schemes: 1/x, 1/x^2, and 1/sqrt(x). All showed similar results. I think 1/x is the most standard, and I will report the rest of the results in this post using that weighting scheme. The sketch below shows how the test is set up.
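
For anyone who wants to reproduce this, the outline looks like the following, where fit stands for the MM07-style lm() model and coords for a two-column matrix of grid cell longitude/latitude. The 2000 km neighbor cutoff is my own choice for this sketch, not something specified in MM07 or S09.

    # Moran's I on the regression residuals with inverse-distance (1/x) weights.
    library(spdep)
    nb    <- dnearneigh(coords, 0, 2000, longlat = TRUE)   # neighbors within 2000 km
    wts   <- lapply(nbdists(nb, coords, longlat = TRUE), function(d) 1 / d)
    listw <- nb2listw(nb, glist = wts, style = "W")
    lm.morantest(fit, listw)   # Moran's I for lm residuals, with a p-value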

The conclusion unsurprisingly is the same as Dr. McKitrick’s follow up paper (unpublished, but available on his website) dealing with spatial autocorrelation of his results.

As a point of interest I ran the Moran's I test on the results of the regression using the RSS data as discussed in S09. A bit surprisingly, this shows more signs of spatial autocorrelation, at .029, with significance just below the 95% confidence level (p-value = .053). Later I will show the results of running both the UAH and RSS regressions with regression estimators that take spatial autocorrelation into account.

Turning to the model data I duplicated the results in S09 by running a regression using the modeled tropospheric and surface data. Exactly as in S09 various economic variables showed significance in the regression, although the coefficients are very small.

Running the Moran's I test on this result showed that the residuals are significantly correlated with location. The statistic is .06 with a p-value < .01. Thus, as hypothesized in S09, the spurious results of this regression appear to be caused by spatial autocorrelation. However, as the Moran's I test on the MM07 regression shows, the fact that this model-based regression gets spurious results in no way means that the MM07 results are spurious.

Another interesting result, which is only discussed briefly in MM07 and was not discussed in S09, is that a regression of the surface data without tropospheric data also shows significance for the economic variables. This result would tend to minimize the concern about the choice of satellite data introduced in S09. I duplicated this regression result, but the Moran's I test shows very significant autocorrelation in the residuals. The statistic is .17 with a p-value < .01. So for the moment at least the regression result isn't meaningful.

The lagsarlm function in spdep is a regression estimator that takes spatial autocorrelation into account. It yields significance values for the individual variables, as well as a p-value showing the significance of the overall result. As a start I ran an estimate on the model described above, which doesn't use the tropospheric data. The resulting estimate shows that several of the economic variables are significant (all but x). In addition the overall p-value is < .01. Using the Model E data I get an estimate where the coefficients are extremely small but several (e, x, p, m) are significant. However, the p-value of the result is .38, so the overall result isn't significant. This shows that the test is working, since we wouldn't expect a significant result: the economic data could have no effect on the model.
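For reference, a lagsarlm call looks roughly like this, reusing the inverse-distance weights list lw from the Moran's I sketch above. The formula is illustrative, showing the surface-only model with the MM07 covariates; surf_trend is my placeholder name for the surface trend column.

# Spatial lag model for the surface-only regression (formula is a
# sketch; lw is the inverse-distance weights list built earlier)
fit.lag <- lagsarlm(surf_trend ~ slp + dry + dslp + Water + abslat +
                      g + e + x + p + m + y + c,
                    data = global, listw = lw)
summary(fit.lag)  # per-variable p-values plus the overall model test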

Running the UAH model through lagsarlm, I get a similar result to the original regression. In addition the p-value is .02, showing that the result is significant. So the UAH model in MM07 holds up under both tests.

Finally I ran the RSS model from S09. Taking spatial autocorrelation into account raises the significance of the economic variables: now g, e, p, m, y, and c are all significant, and the p-value for the overall regression is .018. Absent any other information, then, I would conclude that MM07 is demonstrating that surface temperature trends are affected by a set of economic variables.

This is hardly surprising, since there are all sorts of ways it could happen; urban heat island is only one reason, along with various large-scale surface changes. I don't know the significance of this, but the Model E data shows the troposphere trend at .16 degrees per decade with the surface at .14 degrees per decade. This is in contrast to the measured data, with the surface at .30 and the troposphere (both UAH and RSS) at .24. In the model the surface is heating roughly ten percent less than the troposphere, but as measured in the real world the surface is heating 25% faster. Of course all kinds of things could cause this discrepancy, including issues with the model data and with the measurements of either the troposphere or the surface. But it is ironic that this roughly 30% swing is in line with MM07's estimate of the bias in the global surface temperature trend.

Test results can be found here. An ever-growing code file can be found here.

Tuesday, February 17, 2009

Update to MM07 and S07 Analysis

[It is 2/19/2009 and the conclusions of this post have changed. I'm not sure what I did before, but I now get a somewhat different result than when I first put this up.] As I was writing my previous post about MM07 and S09, Dr. Schmidt was responding to me at realclimate. Dr. Schmidt says that he picked Top/Right of the four choices, which would be the same as MM07. I'm sure he is right, but the results look more like one of the two bottom choices from my analysis.

Anyway, the point isn't that important, because Dr. Schmidt agreed with me that the best approach is to average the four boxes to match the 5x5 degree surface grid. He recalculated the result and put it in his supplementary information, reporting no significant change. I actually get a somewhat different result, with the regression using RSS data yielding significant correlations with the economic factors.

I have rerun the UAH and RSS analysis using the 5x5 grid, and the results are below. The UAH result matches MM07, but for me the RSS result doesn't completely match S09. In the RSS case the significance of some of the economic factors drops, but not by as much as stated in S09. I don't know what is causing the difference, as the process for converting the RSS set to temperature trends isn't documented in S09.

[update later on 2/19] As I was looking back at the S09 SI, I realized that S09 used an updated surface temperature set in addition to the RSS data. The RSS data alone doesn't really change the results from MM07, but the two together push the p-values for some of the economic variables just above the .05 significance level. I show the results both ways in an attached PDF. The result with the updated surface data and the RSS data looks like S09.

I note that the updated surface data still gets essentially the same results with the 5x5 UAH data.

I have to say that until I understand why the RSS data has such a different standard deviation than both the UAH trop data and the surface data I'm not sure that it is valid to conclude there is an issue with MM07 from the fact that the model doesn't work well with the RSS data. As I mentioned in my previous post the model works well without any tropospheric data at all.

Meanwhile the issue of spatial correlation keeps coming up, so I guess I will look into that as well.

Updated code is here.

Here are the results using RSS and UAH and the 5x5 grid.


A look at MM07 and S09

The publication “Spurious correlations between recent warming and indices of local economic activity” (S09) renewed my interest in one of the papers it discusses, “Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data” (MM07). The topic of the accuracy of the surface record seems to come up quite a bit. MM07 demonstrated correlations between various measures of economic activity in a geographic area and the level of measured surface warming. S09 showed that the results of MM07 depended on the choice of satellite measurement. It also demonstrated that some correlations could be found between economic data and climate model results. These would have to be spurious, since the climate models are independent of economic factors.

What is interesting about my work so far is the influence of algorithm choices on the results. Neither paper explains the details of a key step in its data processing, although through an email exchange Dr. McKitrick pointed me to some old software from his 2005 paper that shows the choice he made. I think I will show that the choice in this step is not obvious, and that it could possibly change at least part of the conclusions of both papers.


In MM07 they looked at the relationship between the trends in satellite-measured tropospheric temperatures and surface-station-measured temperatures at various locations. The idea was that the satellite record would be largely uninfluenced by economic factors and would serve as a baseline for the surface temperature trends. I note that the paper also discusses the fact that the satellite record appears to be somewhat influenced by local economic factors. I noticed this as well in a simplistic look, which I will describe later.

Reproducing the core MM07 data required reading through a Stata script and converting the steps to R, as there are details that weren't present in the paper. This is a great example of why having the code available is useful, even if it is in a different language. I would have found it difficult to scale the economic data correctly, and to select the correct rows, without this information. Maybe it would be obvious to others.

Once I got past a little learning curve with R (this is my first time), I was able to quickly reproduce the regression results of both papers using the satellite data. I haven't yet looked at the results reported in S09 where the output of a climate model was used.

The final data set is 440 rows of latitude and longitude locations with the decadal surface trend from HadCrut, the tropospheric trend from UAH, and a set of climate and economic factors. I will call this the “global” table.
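As an aside, each trend in the table is presumably just a least-squares slope fitted to the monthly anomalies at that location. A sketch of the calculation for a single grid cell, where anom is a placeholder for a numeric vector of monthly anomaly values:

# Decadal trend at one grid cell from its monthly anomalies
# (anom is a hypothetical numeric vector of monthly values)
t.dec <- seq_along(anom) / 120     # elapsed time in decades
fit   <- lm(anom ~ t.dec)
coef(fit)["t.dec"]                 # slope in K per decade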

In MM07 they used only the UAH data, stating that they didn't believe the choice of satellite data set would matter. Given how easy it would be to do, I think they should also have tested the RSS data. Dr. Schmidt obviously agreed that it should have been tested, and created a parallel trend table using RSS. Using the trend calculated and supplied by Dr. Schmidt, many of the correlations in MM07 disappear.

I note that in his blog post he says that he also tested updated UAH data as well as different surface temperature sets. The results of those tests are not reported in his paper, but I assume that they were similar to the results in MM07.

The fact that substituting RSS data for UAH data changed the outcome was surprising to me. After many adjustments over the years by each team, my impression was that the two products were very similar, and I assumed this was particularly true on decadal time scales.

In S09 Dr. Schmidt speculated that the difference in results was due to a higher trend in the RSS data. Looking at the supplied data for the 440 points in the global table, I noted that the mean trends are nearly identical between RSS and UAH (.237 K/decade vs .232 K/decade), so I'm not sure why he wrote that.

What was different between the RSS and UAH data was the standard deviation: .13 for RSS versus .19 for UAH. This is a pretty big difference relative to the observed values when both are supposed to be measuring the same thing.
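Checking this takes one line, using the trend column names from my regression output (myuah_trop and myrss_trop):

# Mean and standard deviation of the two tropospheric trend columns
sapply(global[, c("myuah_trop", "myrss_trop")],
       function(x) c(mean = mean(x), sd = sd(x)))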

I had never seen anything written about this type of difference in the two data sets. As a result I wanted to look at the two data sets more closely to try to understand the difference.

As a step toward doing this, I decided to try to recreate the 440 rows used in both MM07 and S09 from the monthly anomaly data provided by UAH and RSS. It was then that I hit a problem: the lat and long values in the rows of the global table don't match the grids in either the UAH or the RSS products.

UAH and RSS produce data on a 2.5x2.5 degree grid of discrete points. For example, in the UAH data set that I used, latitude begins at -88.75 degrees and continues in 2.5 degree increments to 88.75. Longitude begins at -178.75 degrees and continues in 2.5 degree increments to 178.75.

Just as an example, in the first row of the global table the latitude is 52.5 and the longitude is 2.5. That point doesn't appear on the satellite grid; in fact neither the latitude nor the longitude appears. So which satellite grid cell do I pick in order to calculate the trend? There are in fact four grid cells that surround the point, and I could pick any of them (Top/Left, Top/Right, Bottom/Left, or Bottom/Right).

It also occurred to me that you could average the trend across all four cells, and that might be the "correct" answer, since it is my understanding that the surface data is on a 5x5 grid. I haven't tried this yet, but I intend to, to see if anything interesting happens. Dr. Schmidt reports that he has done this and that the results are essentially the same as in S09. I want to try it just for fun anyway. (Update 2/17: I have done this and the results are here.)
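To make the geometry concrete, here is the layout of the four surrounding satellite cells for the example point above (the helper function is mine, not from either paper):

# The four 2.5-degree satellite cells surrounding a 5x5 surface point;
# the satellite grid centres sit 1.25 degrees away in each direction
four.cells <- function(lat, lon) {
  expand.grid(lat = lat + c(-1.25, 1.25), lon = lon + c(-1.25, 1.25))
}
four.cells(52.5, 2.5)
#     lat  lon
# 1 51.25 1.25   (Bottom/Left)
# 2 53.75 1.25   (Top/Left)
# 3 51.25 3.75   (Bottom/Right)
# 4 53.75 3.75   (Top/Right)

Averaging the trends across these four cells is what matches the 5x5 surface grid.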

It seemed to me that the choice among the four was potentially important even given the fact that there is a lot of spatial correlation, and in fact it turned out to affect the results. There is no indication in either paper, or in the supplied supplementary information, of which choice was made. Looking at the code pointed to by Dr. McKitrick in his email, it looks like he picked Top/Right back in 2005 when he created the data set. Dr. Schmidt didn't respond to a question asking which choice was made in S09. (Update on 2/17: Dr. Schmidt responded and said he had picked Top/Right as well.)

Looking at the UAH data, Top/Left and Top/Right give results that look a lot like MM07. They are not identical, but I am using updated UAH data, and it is possible that the trend calculations in R are not the same as in Stata. If I use Bottom/Left or Bottom/Right the results aren't as good, but there are still correlations with the economic variables.

In the case of the RSS data, Top/Left and Top/Right give results that are fairly similar to MM07. Bottom/Left, and particularly Bottom/Right, yield results that are more like S09.

So my conclusion so far is that the choice seems to matter to the result, and I don't know the right way to make it. There are other potential choices in computing the trend data from the satellite grid, but I think this is the biggest one. I would very much have liked to see in the supplementary information how this was done in each case.

I want to note that if you simply regress the surface temperature anomaly against the economic factors you still get significant results. Other geographic factors now become important as well, which makes sense because they aren't being canceled out by the tropospheric data. I can't think of any reason there should be higher surface anomalies in areas of higher economic activity, so this is another indication of issues in the surface temperature record. This wasn't discussed in either MM07 or S09, although Dr. McKitrick reports that it was discussed in a 2004 paper.

Regressing either satellite trend against the other factors also results in significant correlations although to a lesser degree (if you will pardon the pun). This isn’t so surprising since the lower tropospheric measurements of the satellites might be influenced by a broad range of man made surface changes. This was discussed in MM07 but not commented on in S09.

I want to thank both Dr. McKitrick and Dr. Schmidt for being so responsive to an amateur.

R scripts to see how I arrived at these conclusions can be found here.

A follow up using 5x5 grids is here.

The following is an update after I wrote the original post.


The following is the result I got using Top/Right and the UAH data. It is similar, but not identical, to MM07.

Mean trend is .2339
Residuals:
Min 1Q Median 3Q Max
-0.85006 -0.11274 -0.00614 0.11036 0.62474

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.3795953 3.4086411 -1.578 0.11526
myuah_trop 0.9283931 0.0664378 13.974 <2e-16 ***
slp 0.0055687 0.0033623 1.656 0.09841 .
dryTRUE 2.6062329 4.1729660 0.625 0.53260
dslp -0.0024560 0.0041043 -0.598 0.54990
Water -0.0242371 0.0200943 -1.206 0.22842
abslat 0.0001628 0.0009495 0.171 0.86396
g 0.0392948 0.0171258 2.294 0.02225 *
e -0.0027524 0.0004521 -6.088 2.55e-09 ***
x 0.0041382 0.0035775 1.157 0.24803
p 0.3841318 0.1165618 3.296 0.00106 **
m 0.3947718 0.1431373 2.758 0.00607 **
y -0.2987748 0.1114165 -2.682 0.00761 **
c 0.0058406 0.0025162 2.321 0.02075 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1764 on 426 degrees of freedom
Multiple R-squared: 0.5441, Adjusted R-squared: 0.5302
F-statistic: 39.11 on 13 and 426 DF, p-value: < 2.2e-16

On RealClimate.org Dr. Schmidt told me that he selected Top/Right when he wrote his paper. The following are the results I get when I select Top/Right with the RSS data. As you can see, I still show significance at the 95% level for several of the economic variables, including population. The significance is less than what I saw using the UAH data as posted above.

Mean trend is .2344
Residuals:
Min 1Q Median 3Q Max
-0.992645 -0.115114 -0.008233 0.111465 0.574084

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -4.9067505 3.4937453 -1.404 0.16092
myrss_trop 0.9485121 0.0738105 12.851 <2e-16 ***
slp 0.0049539 0.0034460 1.438 0.15129
dryTRUE 4.0677276 4.3150450 0.943 0.34638
dslp -0.0039007 0.0042443 -0.919 0.35859
Water -0.0139312 0.0205568 -0.678 0.49834
abslat 0.0031509 0.0008996 3.503 0.00051 ***
g 0.0479709 0.0174798 2.744 0.00632 **
e -0.0023729 0.0004656 -5.097 5.21e-07 ***
x 0.0058653 0.0037066 1.582 0.11430
p 0.2687887 0.1208100 2.225 0.02661 *
m 0.3385325 0.1474768 2.295 0.02219 *
y -0.2460588 0.1148984 -2.142 0.03280 *
c 0.0066888 0.0025751 2.597 0.00972 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1809 on 426 degrees of freedom
Multiple R-squared: 0.5209, Adjusted R-squared: 0.5062
F-statistic: 35.62 on 13 and 426 DF, p-value: < 2.2e-16

I don't know why I am getting a different result than S09. It would be nice to see the code from that paper that processes the RSS data. Maybe I'm making a mistake of some kind. The trend mean looks pretty similar, however.


Monday, January 5, 2009

Carbon Taxes - Nice Idea But They Won't Get The Job Done

Recently there has been renewed discussion in the US of using carbon taxes to reduce CO2 emissions. I should start by saying that of the various unilateral mechanisms this is probably the best idea. However, if the goal is to solve the problem of atmospheric CO2 doubling, it is a completely ineffective idea, for a number of reasons. The first is that the amount of tax required would be far too high to be politically acceptable. The second is that even if it were accomplished in the US, it would simply push energy consumption for manufacturing to other countries. Finally, policies designed to keep that from happening are not practical.

As I stated a uniform tax is by far the best way to reduce CO2 output. It involves the least political meddling. It is easy to understand. And it would allow consumers and industrial users to make optimal choices about the best way to deal with the new circumstances. So if our goal is to reduce CO2 output by a modest amount over the coming decades then I vote for a carbon tax. But there is no way this will get close to getting the job done.

Assuming we take no steps other than reducing CO2 output, that output needs to fall by at least eighty percent globally from current levels in the fairly near future. But putting the global discussion off for a moment, we can look at what kind of tax would be necessary to achieve an eighty percent reduction of CO2 output in the US alone.

The people proposing this seem to be ignoring the fact that there is already some limited experience with carbon taxes. The US and Europe have taxes on gasoline, of course much higher in Europe. Also, since about 1999 Europe has implemented some limited forms of carbon taxes. A discussion of these taxes and their results can be found in the AR4 WG3 report: the tax levels are discussed on page 481, and the results on page 756. The WG3 report can hardly be viewed as optimistic on the issue.

As an example, in the UK a roughly 10-20% tax was placed on the industrial use of power from various sources. Even at that level there wasn't the political will to include all power users. The experience was similar in Germany, where heavy users were exempted. In the UK the estimate is that this reduced consumption by about 2%, which is a bit short of the 80% goal.

There has also been quite a bit of research on the price elasticity of energy demand. As one might expect, the elasticity is pretty low. Even the recent experience of the massive increase in the price of oil suggests that figures of -.20 might be high. (This would mean a .2% decrease in use for each 1% increase in price.) Even using this figure, a simple linear calculation says the price of energy would have to increase by 400%, meaning a tax rate of 400%, to reduce use by 80%; a constant-elasticity calculation gives a far larger number still. I'll hold my breath waiting for someone to propose that. You can take your pick of what tax rate would even be possible, but if one could be passed I am certain it would be nowhere near high enough to move the needle more than, say, ten points. And even that would probably be political suicide.
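The arithmetic, in both the linear approximation and the even less forgiving constant-elasticity form, fits in a few lines of R:

# How much would the price have to rise to cut energy use by 80%?
elasticity <- -0.20
remaining  <- 0.20                  # fraction of current use left

# Linear approximation: %change in demand = elasticity * %change in price
(remaining - 1) / elasticity * 100  # 400 -> a 400% price increase

# Constant-elasticity demand curve: Q/Q0 = (P/P0)^elasticity
remaining^(1 / elasticity)          # 0.2^(-5) = 3125 times the price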

Here is the IPCC take on energy price elasticity:

The effect of energy taxes depends on energy price elasticity, that is the percent change in energy demand associated with each 1% change in price. In general, residential energy price elasticities are low in the richest countries. In the UK, long-run price elasticity for the household sector is only –0.19 (Eyre, 1998), in the Netherlands –0.25 (Jeeninga and Boots, 2001) and in Texas only –0.08 (Bernstein and Griffin, 2005).

I should note that it is a clear understanding of this arithmetic that has caused the Europeans to try cap and trade to achieve their CO2 reduction goals. Cap and trade is a kind of hidden tax with many disadvantages, but its biggest advantage is that it isn't called a tax. I think we are already seeing the problem, however: when they set the caps low enough to have a significant effect, they wind up backing down. But cap and trade is a discussion for another time.

Some make the argument that a tax would spur development of technologies that would get us some or all of the rest of the way. It certainly would cause some substitution of more expensive non-carbon-producing energy. This is part of the long-term elasticity, which is probably somewhat higher than the studies quoted above indicate. But I don't know of any evidence that it would make the alternatives cheaper. Perversely, it could have the opposite effect, since the incentive to make alternatives less expensive would be reduced.

But even assuming that we could impose this carbon tax, and that over time it achieved the desired result in the US, it would have almost no effect on global emissions. After all, it wouldn't slow CO2 output anywhere else. To make matters worse, a good portion of the reduction in the US would likely be achieved by exporting CO2 output to other countries. To be effective, a carbon tax would need to be global in addition to being very large.

Some have proposed solving this portion of the problem by taxing imports from countries that don't tax carbon at least as heavily as we do. This has three problems: it is unworkable, it would likely cause a trade war, and it wouldn't achieve the objective. (In the following paragraphs I use China as a proxy for the entire developing world.)

It is unworkable because there is no way of knowing the energy content of something being imported. Would we have a giant bureaucracy that assesses every item from every "non-compliant" country? How could they know whether the energy for a particular product came from a Chinese coal plant or from the Three Gorges Dam? If they don't differentiate, then where is the incentive for China to use clean power? It becomes just an import tariff on Chinese goods, and China would still want to produce goods using the cheapest possible power. I suppose we could assess China's overall CO2 output per unit of energy and hit every product based on our estimate of the power it takes to produce. But that provides no incentive for energy efficiency, and plenty of incentive for obfuscation and argument. And we would have to make these judgments for every country and every product. It is hard to imagine anyone thinking this is practical.

The likely effect of this type of policy would be a trade war. The countries hit by this "energy tax" would view it as a tariff. Even among countries that think a tax is a good idea there would be endless arguments over the right amount of tax for various energy inputs. In retaliation they would impose tariffs on US goods, investments, and so on. For the results of this, take a look at the Great Depression in Wikipedia.

Finally, it wouldn't achieve the objective. Even if China quietly accepted our tariff, it wouldn't cause them to emit less CO2; it would just increase the cost of imports. We aren't China's only market, and if their other customers don't coordinate with us we wouldn't have much influence. In fact, the history of these kinds of things is that other countries would use the opportunity to gain advantage in the Chinese market.

Some have commented that things like VAT on imported items have been imposed without issue. There are two fundamental differences between a VAT and a carbon tax. First, VAT is imposed based on the type of item, not on the inputs to its manufacture. Second, VAT on imports normalizes a tax regime relative to domestic products rather than penalizing foreign products.

The fact that VAT is based on the type of item is a crucial difference. When an item comes into the country it is easy to see whether it is a car, or clothing, or furniture, so it is feasible to determine the tax needed to normalize it with products manufactured domestically. In contrast, there is no way to tell how much carbon was emitted in the manufacture of a particular item, and so no reasonable way to impose a carbon tax on it.

Without the ability to tax based on the carbon input to a particular item, which is the whole point of a carbon tax, we are left with the idea of taxing everything from a foreign supplier based on some notional level of carbon used in their manufacturing processes. We could then try to distinguish the energy inputs of various categories of products and tax them on that basis. But in the end you are left with what is essentially a tariff on products from that country. Presumably you would be willing to negotiate the tariff lower based on your conclusions about their progress on carbon intensity, but this is much more political than economic.

In the end what you are trying to do is get them to implement a carbon tax so that you will remove a burdensome import tariff. Trying to change the political and economic decisions of other countries using tariffs is essentially economic warfare. The long-run outcome is very uncertain, and in the short run it is likely to provoke retaliatory tariffs, with negative consequences.

I would also note that VAT on imports doesn't have the same political effect as a tariff, because it is seen as creating a level playing field between imported and domestic products. As long as the same VAT is applied to domestic products of a similar type, the foreign country is unlikely to feel the need to retaliate. This is different from a tariff, where foreign products are put at a disadvantage to domestic products.

One commenter brought up something like the Green Dot system, which started in Germany. Again, the difference is that at the time of importation it is easy to monitor compliance: whether something is packaged, and whether a tax or fee has been paid on that packaging, is quite obvious. There would be no equivalent way to know the amount of carbon used to produce and transport the same product.

There are many taxes and regulations on imported products in many countries. The US, for example, has smog and safety regulations on automobiles that don't exist elsewhere. Again, these are possible to impose because the item being imported can be inspected to determine whether it meets the regulation.

So in summary: we are extremely unlikely to impose a tax high enough to achieve the eighty percent goal in any useful time frame, and even if we did, it would achieve nothing unless it were coordinated globally, which is even less likely. A carbon tax is a good idea for achieving small reductions in US CO2 output. As a policy for stopping dangerous greenhouse warming it will not be effective.

So what should we do? Try to minimize how much we put in while we achieve the crucial task of finding a way to get the CO2 out of the atmosphere. I will write more about that in my next post.

80% Is the Number

That is how much CO2 emissions would have to be reduced just to stabilize atmospheric carbon dioxide at current levels. The reason is that CO2 stays in the atmosphere for a very long time after we put it there. Quoting from the AR4 physical science basis, page 824:

"While more than half of the CO2 emitted is currently removed from the atmosphere within a century, some fraction (about 20%) of emitted CO2 remains in the atmosphere for many millennia. Because of slow removal processes, atmospheric CO2 will continue to increase in the long term even if its emission is substantially reduced from present levels."

The eighty percent figure does not appear directly, but it can be interpolated from the graphs at the bottom of page 824.

What this means is that reducing CO2 output by less than eighty percent simply delays the peak atmospheric level, assuming that humans wind up burning all the fossil fuels they can get to. (There are various international mechanisms that might keep this total down, but I am pessimistic about their implementation in relevant time frames, as I will discuss in other posts.)

Based on my informal surveying, most people do not understand this figure, yet it is critical to understanding the issue. Governments in developed countries are already straining to discuss figures like fifty percent cuts over thirty or fifty years. Developing countries are not even discussing restricting their growth rates, let alone reducing from current levels. Simple math shows that there is no possible way for this to result in an eighty percent reduction from current output anywhere in the foreseeable future. Instead, just stabilizing output at current levels, or even somewhat higher levels, on a global basis will be challenging.
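The simple math is easy to illustrate. The fifty-fifty split below is a round illustrative number, not actual inventory data:

# Toy arithmetic: an aggressive developed-world cut with flat
# developing-world emissions falls far short of an 80% global cut
developed  <- 0.5   # assumed share of current global emissions
developing <- 0.5

future <- developed * (1 - 0.5) +   # developed world cuts 50%
          developing * 1.0          # developing world merely holds flat
future                              # 0.75 -> only a 25% global reduction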

Once people understand the eighty percent figure, most realize that the currently proposed approaches to solving the issue are completely unrealistic. We need to accept this and implement policies that could actually achieve the objective of lowering atmospheric CO2 levels, which will certainly get higher before we can start reducing them.

At this point I believe the only possible way to reduce atmospheric CO2 is to come up with solutions that pull it back out of the atmosphere. All alternatives that achieve this need to be explored. Of course breakthroughs in low cost energy production will help reduce the size of this task.