Monday, Oct 1, 2012

Climate sensitivity and the Stern report

From time to time I have been taking a look at the Stern Review. It is central to the cause of global warming alarmism, and while there's a lot to plough through, this does at least mean that one may come across something new.

As part of my learning process, I have been enjoying some interesting exchanges with Chris Hope of the Judge Business School at Cambridge. Chris was responsible for the PAGE economic model, which underpinned Stern's work. The review was based on the 2002 version of the model, but a newer update - PAGE 2009 - has now appeared and I have been reading up about this from Chris's working papers, in particular this one, which looks at the social cost of carbon.

The first major variable discussed in the paper is, as you would expect, climate sensitivity. The Stern Review came out around the same time as the IPCC's Fourth Assessment Report, so we would expect the take on this most critical figure to be the same in the two documents, and I have seen no sign that this isn't the case. Indeed, the working paper notes that the mean has remained virtually unchanged since the time of Stern:

The mean value is unchanged from the default PAGE2002 mean value of 3°C, but the range at the upper end is greater. In PAGE2002, the climate sensitivity was input as a triangular probability distribution, with a minimum value of 1.5°C and a maximum of 5°C.
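Incidentally, those figures pin down the triangular input completely: with a minimum of 1.5°C, a maximum of 5°C and a mean of 3°C, the mode must be 2.5°C, since the mean of a triangular distribution is just the average of its three parameters. A minimal sketch in Python (my own illustration, not code from PAGE itself):

    import numpy as np

    # PAGE2002 climate sensitivity input as described above: triangular,
    # minimum 1.5 degC, maximum 5 degC, mean 3 degC. Since the mean of a
    # triangular distribution is (min + mode + max) / 3, the implied mode
    # is 3 * 3 - 1.5 - 5 = 2.5 degC.
    cs_min, cs_mode, cs_max = 1.5, 2.5, 5.0

    rng = np.random.default_rng(42)
    samples = rng.triangular(cs_min, cs_mode, cs_max, size=1_000_000)

    print(f"mean = {samples.mean():.2f} degC")   # ~3.0, matching the default
    print(f"5th-95th percentile = {np.percentile(samples, 5):.2f}"
          f" to {np.percentile(samples, 95):.2f} degC")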

The Fourth Assessment Report reviewed all the major studies on climate sensitivity at the time and reported them in a spaghetti graph, which I've redrawn below:

Don't worry for the minute about which study is which. We can simply note the very wide range of estimates, with modes between 1 and 3°C (ignoring the rather wacky black line). We also see that the distributions are all skewed far to the right, implying mean and median values several degrees higher than the modes.

In the next diagram I superimpose these values on top of the values used in the 2009 version of the PAGE model.

As you can see, the PAGE model (in red) pitches itself right in the middle of the range, its distribution leaving out both the territory covered by the cooler peaks at the left-hand side and the catastrophic values at the right. So far, this appears at least defensible.

Chris Hope summarises his values as follows:

The lowest values are about 1.5 degC, there is a 5% chance that it will be below about 1.85 degC, the most likely value is about 2.5 degC, the mean value is about 3 degC, there is a 5% chance that it will be above 4.6 degC, and a long tail reaching out to nearly 7 degC. This distribution is consistent with the latest estimates from IPCC, 2007, which states that “equilibrium climate sensitivity is likely to be in the range 2°C to 4.5°C, with a best estimate value of about 3°C. It is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement with observations is not as good for those values. Probability density functions derived from different information and approaches generally tend to have a long tail towards high values exceeding 4.5°C. Analysis of climate and forcing evolution over previous centuries and model ensemble studies do not rule out climate sensitivity being as high as 6°C or more.” (IPCC, 2007, TS4.5)
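Those summary figures can at least be checked for internal consistency. In the sketch below I fit a lognormal through the quoted 5th and 95th percentiles (1.85°C and 4.6°C); the lognormal shape is purely my assumption for illustration, not necessarily the form used in PAGE09, but the implied mean, mode and upper tail come out close to the values Chris Hope quotes.

    import numpy as np
    from scipy.stats import norm

    # Quoted percentiles of the PAGE09 climate sensitivity input
    p05, p95 = 1.85, 4.6                          # degC

    # Fit a lognormal through the two percentiles (illustrative assumption only)
    z05, z95 = norm.ppf(0.05), norm.ppf(0.95)
    mu = (np.log(p05) + np.log(p95)) / 2
    sigma = (np.log(p95) - np.log(p05)) / (z95 - z05)

    mean = np.exp(mu + sigma**2 / 2)              # ~3.0 degC (quoted mean ~3)
    mode = np.exp(mu - sigma**2)                  # ~2.7 degC (quoted mode ~2.5)
    tail = np.exp(mu + norm.ppf(0.999) * sigma)   # ~6.9 degC (tail to nearly 7)

    print(f"mean = {mean:.2f}, mode = {mode:.2f}, 99.9th pct = {tail:.2f} degC")

So the description is at least internally consistent; the argument that follows is about whether the location of the whole distribution is right, not its arithmetic.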

However, now we hit what I think is a snag: not all of the estimates of climate sensitivity are equal. Most of the studies published in the IPCC report were either entirely based on climate model output or relied upon it to some extent. In fact there was only one exception: the paper by Forster and Gregory, the sole wholly empirical study in the corpus. I'll highlight that one in this next diagram.

Now the picture looks rather less satisfying. We can see that empirical measurement suggests a low climate sensitivity, with the most likely value at around 1.5°C. Higher values are driven by the modelling studies. Moreover, we can see that large ranges of climate sensitivity values implied by the empirical measurements of Forster and Gregory are not covered by the PAGE model at all. The IPCC's suggestion – that climate sensitivity is most likely to be in the range 2–4.5°C – is shown to be barely supportable, and then only by favouring computer simulations of the climate over empirical measurements. This seems to me to throw lesson one of the scientific method out of the classroom window. And I really do mean lesson one: as Feynman put it, if it disagrees with experiment, it's wrong.

So an examination suggests that the values of climate sensitivity used in the PAGE model are highly debatable. But of course it's actually even worse than that (it usually is). Close followers of the climate debate will recall Nic Lewis's guest post at Prof Curry's blog last year, in which he noted that the "Forster and Gregory" values in the IPCC graph were not the values that were implicit in Forster and Gregory's published results - the IPCC had notoriously chosen to restate the findings in a way that gave a radically higher estimate of climate sensitivity.

So next I replot the IPCC figures, but using the real Forster and Gregory results rather than the "reworked" ones:

So now we see that there is very little overlap between climate sensitivity as used in the PAGE model and empirical measurement of that figure. If we look back to the IPCC narrative, their claim that

Values substantially higher than 4.5°C cannot be excluded, but agreement with observations is not as good for those values.

looks highly disingenuous. When they say the agreement with observations is "not as good", do they not mean that there is almost no agreement at all? And when they say that values above 4.5 degrees cannot be excluded, do they not mean that they must be excluded, because they are ruled out by empirical observation?

If Feynman is to be believed, the climate sensitivity values used in the Stern review are "wrong". Perhaps some of my more policy oriented readers can explain to me why the political process would want to use figures that are "wrong" instead of figures that are right.


Reader Comments (106)

Concerning Mike Haseler’s comments (Oct 2, 2012 at 10:44 AM), I provided links to the IPCC tables because they are quite candid about all the things that the models either ignore or account for with a high degree of uncertainty. I therefore assume that if Chris Hope simply reads these IPCC references he will come away with a clear understanding that the models are far from perfect and that their predictions have a high degree of uncertainty. Moreover, that cannot be ‘averaged out’ by multiple runs or scenario ensembles because they may all have a common missing factor that will likely give rise to a common bias.

Concerning Robin Guenier’s question (Oct 2, 2012 at 2:17 PM), my simple understanding of the Scientific Method is: a) hypothesize; b) predict; c) test; d) compare prediction with test results; e) change or elaborate hypothesis according to outcome of comparison; f) repeat b) through e) until funds are exhausted or you get a Nobel Prize :-)

Based upon this understanding, it seems to me that you asked your speaker for d) but got a philosophical discussion about a) and the obvious fact that models are used to express a complex hypothesis. I’ll let you be the judge as to whether this was a genuine misunderstanding or a deliberate attempt to confuse the issue.

Oct 2, 2012 at 11:48 PM | Unregistered CommenterDave Salt

Sean Houlihane asks what the cost per tonne of CO2 is at a distribution of 0.9, 2.0 and 5.0, and gets a response from Chris Hope of $74 per tonne of CO2 (keeping all other assumptions the same).

Running the model with a climate sensitivity of 1 degC, and no other changes gives a mean SCCO2 of just under $7 per tonne of CO2.

And the current market price is 1-2 euros per tonne of CO2?

Perhaps the market price is telling us what the real TCR is. Maybe the imbalance between the market price and the modelled price is a measure of the cost to the economy being imposed by imperfect analysis and bad policy, in which case the federal government in Australia (pitching a price at $25/tonne) gets the prize. Almost like a good Chinese five-year plan, really.

BTW thanks to Chris Hope for engaging with us sceptics lurking in the crevices of the blogverse, and to the Bishop for keeping the tone civil.

Oct 3, 2012 at 1:06 AM | Unregistered Commenterlittle polyp

Reading the Forster and Gregory 2006 paper as Chris Hope suggested, I noted that they had said it would be good to use more of the scanning radiometer data. Since they used only 5 years' worth (ending in 1996), I sent Forster an email this morning asking if they were planning on extending their coverage. Forster responded (in 8 minutes flat!), including a reference to a later paper (Murphy 2009) using another 9 years of data. In the email, however, Forster said they were moving away from considering lambda a good estimate of climate sensitivity. As I read the paper, they are thinking that the value of lambda based on short-term measurements is different from the long-term equilibrium value. This results in a rather weak upper limit of about 10 C per doubling of CO2. They also mention other work putting a more stringent lower limit near 2 C per doubling. They state "We adopt lambda = 1.25 ± 0.5 W/m2/K as an estimate for the response of net radiation to temperature variations between the [sic] 1950 and 2004." Later on, however, they state a range of 0.04 to 1.25 W which "does not exclude the inverse climate sensitivity of 0.37 W/m2/K for a 10C warming for doubled CO2." So if their new range is something like 2-10 C per doubling, they are now, as they state, generally consistent with the IPCC AR4.

Murphy, D. M., S. Solomon, R. W. Portmann, K. H. Rosenlof, P. M. Forster, and T. Wong (2009), An observationally based energy balance for the Earth since 1950, Journal of Geophysical Research, 114, D17107, doi:10.1029/2009JD012105.

Oct 3, 2012 at 5:16 PM | Unregistered CommenterLance Wallace

Lance Wallace

I don't trust the 2009 Murphy paper, which used a similar method and arrived at a higher sensitivity - it gave completely different results from Forster & Gregory 2006 when apparently using almost the same data, with no explanation for the difference despite Forster being a co-author. A poor show IMO, and as a result I have little confidence in the results of Murphy 2009 generally.

F&G 2006 used ERBE WFOV annual data for 1985-1996, excluding 1993, and obtained a Y_net of 2.5 W/m^2/K with HadCRU temperature data. F&G 2006 excluded 1997 data since the GISS and HadCRU global temperature records for 1997 were very different, and excluded 1993 and 1998+ due to gaps in the ERBE data. I have been able to replicate, approximately, the F&G 2006 results using information in that paper.
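For context, the F&G-type calculation estimates the climate feedback parameter Y as the regression slope of forcing-minus-net-flux anomalies against surface temperature anomalies; sensitivity then follows as S ≈ F_2x / Y, with F_2x ≈ 3.7 W/m^2 for doubled CO2, so a Y_net of 2.5 W/m^2/K corresponds to S of roughly 1.5 degC. A toy sketch with synthetic numbers (not the actual ERBE/HadCRU replication):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy annual anomalies in the spirit of the F&G 2006 regression:
    # made-up numbers for illustration, not the real ERBE/HadCRU data.
    n_years = 11
    T = rng.normal(0.0, 0.15, n_years)                   # temperature anomalies (K)
    Y_true = 2.5                                         # feedback parameter (W/m^2/K)
    flux = Y_true * T + rng.normal(0.0, 0.3, n_years)    # forcing minus net flux (W/m^2)

    # OLS slope recovers the feedback parameter, subject to noise
    Y_hat = np.polyfit(T, flux, 1)[0]

    F_2x = 3.7                                           # forcing for doubled CO2 (W/m^2)
    S_hat = F_2x / Y_hat                                 # implied climate sensitivity
    print(f"Y_hat = {Y_hat:.2f} W/m^2/K  ->  S = {S_hat:.2f} degC per doubling")

With only a dozen or so annual points, the fitted slope is very sensitive to which years are included, which is why the treatment of 1991-1993 and 1998 matters so much below.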

Murphy 2009 used ERBE WFOV annual data for 1985-1998, excluding 1991-1993 and with 1998 a created composite estimate, and obtained a Y_net of 0.0 W/m^2/K with HadCRU temperature data.

Murphy 2009 restricted the surface temperature data to 60S-60N on the grounds that the ERBE WFOV sensor only covered that latitude range, but according to NASA the 126-degree field of view of the WFOV sensor sees the entire earth disk at satellite altitude, so that looks dubious. But the biggest factor may be the omission of the 1991 and 1992 data, a key period for the regression because of the natural experiment represented by the eruption of Mount Pinatubo.

Nor does Murphy et al. provide a detailed enough method description to replicate the study, and the auxiliary materials referred to in the paper do not appear to exist on the AGU's server.

I wouldn't read anything much into the non-experimentally derived comments in Murphy 2009 that you refer to, such as their new range being generally consistent with IPCC AR4 WG1. I note that Murphy works in the same department as Susan Solomon, overall lead author for AR4 WG1, who is also the second author of the paper. So there may have been an agenda behind the paper.

BTW, MIT professor Dick Lindzen argues that the F&G 2006 (and Murphy 2009) method is likely to overestimate climate sensitivity S, owing to an inability to distinguish cloud changes acting as forcings from those constituting feedbacks. F&G also referred to this issue in their paper. Lindzen & Choi's papers, based on similar methods, argue for S < 1, but as they rely only on tropical data it is a bit difficult to interpret the results.

Oct 3, 2012 at 8:34 PM | Unregistered CommenterNic Lewis

Nic Lewis--

Well, OK, but even granting your comments, what about the additional 6 years of CERES data (2000-2005) that Murphy 09 included? The slopes of the regressions of ERBE 1985-1999 appear to be pretty similar to the slopes of the CERES data.

Oct 4, 2012 at 12:30 AM | Unregistered CommenterLance Wallace

Mmmmmm? Who would I favor? James Hansen's word or yours? Correct! His!

Dec 18, 2012 at 7:03 AM | Unregistered CommenterRob Slaney
