
Climate sensitivity and the Stern report

From time to time I have been taking a look at the Stern Review. It seems so central to the cause of global warming alarmism, and while there's a lot to plough through this does at least mean that one may come across something new.

As part of my learning process, I have been enjoying some interesting exchanges with Chris Hope of the Judge Business School at Cambridge. Chris was responsible for the PAGE economic model, which underpinned Stern's work. The review was based on the 2002 version of the model, but a newer update - PAGE 2009 - has now appeared and I have been reading up about it in Chris's working papers, in particular this one, which looks at the social cost of carbon.

The first major variable discussed in the paper is, as you would expect, climate sensitivity. The Stern Review came out around the same time as the IPCC's Fourth Assessment Report, so we would expect the take on this most critical figure to be the same in the two documents, and indeed I have seen no sign that this isn't the case. The working paper notes that the mean is virtually unchanged since the time of Stern.

The mean value is unchanged from the default PAGE2002 mean value of 3°C, but the range at the upper end is greater. In PAGE2002, the climate sensitivity was input as a triangular probability distribution, with a minimum value of 1.5°C and a maximum of 5°C.
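For concreteness, the arithmetic of that input is easily checked; here is a minimal sketch in Python (my own reconstruction from the figures above, not Hope's actual code). For a triangular distribution the mean is (min + mode + max)/3, so a mean of 3°C with endpoints of 1.5°C and 5°C implies a most likely value of 2.5°C:

```python
import random

# Triangular distribution as described for PAGE2002:
# minimum 1.5 degC, maximum 5 degC, mean 3 degC.
# For a triangular distribution, mean = (min + mode + max) / 3,
# so the implied mode is 3*3 - 1.5 - 5 = 2.5 degC.
low, high, mean = 1.5, 5.0, 3.0
mode = 3 * mean - low - high  # 2.5

random.seed(1)
samples = [random.triangular(low, high, mode) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)
print(f"implied mode: {mode} degC")
print(f"sample mean:  {sample_mean:.2f} degC")  # close to 3.0
```

Note how thin this distribution is: it assigns zero probability to anything below 1.5°C or above 5°C.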

The Fourth Assessment Report reviewed all the major studies on climate sensitivity at the time and reported them in a spaghetti graph, which I've redrawn below:

Don't worry for the moment which study is which. We can simply note the very wide range of estimates, with modes between 1 and 3°C (ignoring the rather wacky black line). We can also see that the distributions are all skewed far to the right, which pulls the mean values well above the modes.
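The direction of that skew matters. A small sketch (my own illustration; the lognormal shape and its parameters are chosen purely as a generic right-skewed example, not fitted to any of the studies) shows how mode, median and mean separate when a distribution has a long right tail:

```python
import math

# For a right-skewed distribution, mode < median < mean.
# Example: a lognormal with underlying mu=1, sigma=0.5
# (hypothetical parameters, for illustration only).
mu, sigma = 1.0, 0.5
mode = math.exp(mu - sigma**2)       # e^(mu - sigma^2)
median = math.exp(mu)                # e^mu
mean = math.exp(mu + sigma**2 / 2)   # e^(mu + sigma^2/2)
print(f"mode={mode:.2f}  median={median:.2f}  mean={mean:.2f}")
assert mode < median < mean
```

So a spaghetti of right-skewed PDFs with modes near 2°C can still deliver means near 3°C, which is exactly the pattern in the graph.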

In the next diagram I superimpose these values on top of the values used in the 2009 version of the PAGE model.

As you can see, the PAGE model (in red) pitches itself right in the middle of the range, its distribution leaving out both the territory covered by the cooler peaks at the left-hand side and the catastrophic values at the right. So far, this appears at least defensible.

Chris Hope summarises his values as follows:

The lowest values are about 1.5 degC, there is a 5% chance that it will be below about 1.85 degC, the most likely value is about 2.5 degC, the mean value is about 3 degC, there is a 5% chance that it will be above 4.6 degC, and a long tail reaching out to nearly 7 degC. This distribution is consistent with the latest estimates from IPCC, 2007, which states that “equilibrium climate sensitivity is likely to be in the range 2°C to 4.5°C, with a best estimate value of about 3°C. It is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement with observations is not as good for those values. Probability density functions derived from different information and approaches generally tend to have a long tail towards high values exceeding 4.5°C. Analysis of climate and forcing evolution over previous centuries and model ensemble studies do not rule out climate sensitivity being as high as 6°C or more.” (IPCC, 2007, TS4.5)

However, now we hit what I think is a snag: not all of the estimates of climate sensitivity are equal. Most of the studies published in the IPCC report were either entirely based on climate model output or relied upon it to some extent. In fact there was only one exception: the paper by Forster and Gregory, which is the only wholly empirical study in the corpus. I'll highlight that one in this next diagram.

Now the picture seems to look rather less satisfying. We can see that empirical measurement is suggesting a low climate sensitivity with the most likely value at around 1.5°C. Higher values are driven by the modelling studies. Moreover, we can see that large ranges of values of climate sensitivity as implied by the empirical measurements of Forster and Gregory are not covered by the PAGE model at all. The IPCC's suggestion – that climate sensitivity is most likely to be in the range 2–4.5°C – is shown to be barely supportable and then only by favouring computer simulations of the climate over empirical measurements. This seems to me to throw lesson one of the scientific method out of the classroom window. And I really do mean lesson one:

So an examination suggests that the values of climate sensitivity used in the PAGE model are highly debatable. But of course it's actually even worse than that (it usually is). Close followers of the climate debate will recall Nic Lewis's guest post at Prof Curry's blog last year, in which he noted that the "Forster and Gregory" values in the IPCC graph were not the values that were implicit in Forster and Gregory's published results - the IPCC had notoriously chosen to restate the findings in a way that gave a radically higher estimate of climate sensitivity.

So next I replot the IPCC figures, but using the real Forster and Gregory results rather than the "reworked" ones:

So now we see that there is very little overlap between climate sensitivity as used in the PAGE model and empirical measurement of that figure. If we look back to the IPCC narrative, their claim that

Values substantially higher than 4.5°C cannot be excluded, but agreement with observations is not as good for those values.

looks highly disingenuous. When they say the agreement with observations is "not as good", do they not mean that there is almost no agreement at all? And when they say that values above 4.5 degrees cannot be excluded, do they not mean that they must be excluded, because they are ruled out by empirical observation?

If Feynman is to be believed, the climate sensitivity values used in the Stern review are "wrong". Perhaps some of my more policy oriented readers can explain to me why the political process would want to use figures that are "wrong" instead of figures that are right.


Reader Comments (106)

Just glanced at the paper, but couldn't see (it may be there) the positive impacts of CO2 or temperature rise - that is, improved agricultural output and reduced fuel bills. Anyone know if it is in the model, as surely these things impact social costs?

Oct 1, 2012 at 4:06 PM | Unregistered CommenterS Matthews

Jake Haye: I'm not quite sure what kind of bias you are implying. The previous version of the model was largely funded by Ofgem.

Richard Drake: I agree that 2 is the key issue. I'm not a climate scientist. In the default model, I try to use a climate sensitivity distribution that approximates the range provided by climate scientists. I am happy for others to use the PAGE model with other distributions and see what effect they have (I've done a couple of calculations in response to comments further up the thread). If what you say about the IPCC is true, I presume the climate scientists' views will change in the 5th assessment report.


Oct 1, 2012 at 4:08 PM | Unregistered CommenterChris Hope

Climate scientists say that models are needed because, in the absence of a real controlled experiment, they do the next best thing and do a controlled experiment on a mathematical representation of the Earth.

I believe most would agree that this is a completely valid approach **if** the mathematical representation of the Earth is sufficiently detailed (i.e. it includes all key physical processes and interactions). However, it is openly stated by the IPCC that some of the key details (e.g. the effect of clouds) are poorly modelled or even ignored, so there must be considerable uncertainty surrounding the conclusions drawn from them.

For example, the IPCC's assessment of radiative forcing is based upon a list of sources...
...that are used to assess the range of uncertainty with respect to the climate's 'sensitivity' to a doubling of CO2. However, I note that this list of sources does not include those listed with a 'Very Low' Level of Scientific Understanding (LOSU) in this table...

This suggests that, if all sources were considered, the true uncertainty may well be very much larger. Moreover, the absence of any key factor (e.g. cosmic rays) could also give rise to a significant bias in all models and so undermine any attempt at averaging out their uncertainties over multiple runs.

I’ve pointed this out many times before and freely admit that I may have missed some subtle step in the IPCC process that was able to compensate for such uncertainties. However, no one has yet been so kind as to point out my mistake and so I continue to be highly sceptical of these model sensitivity assessments.

Oct 1, 2012 at 4:30 PM | Unregistered CommenterDave Salt

Of course there is nothing wrong with Excel, if the underlying equations are published. The same result will be obtained with R, Matlab, Mathematica, awk, perl, tcl, openoffice, pencil and paper, etc. Excel is convenient as it is widely distributed, understood, and used (Phil Jones notwithstanding).

I was puzzled by this statement in Andrew's text:

"However, now we hit what I think is a snag: not all of the estimates of climate sensitivity are equal. Most of the studies published in the IPCC report were either entirely based on climate model output or relied upon it to some extent."

Does this mean that the majority of the distributions plotted above are actually input to GCMs?

Oct 1, 2012 at 4:38 PM | Unregistered CommenterZT

A few clarifications:

1. The climate sensitivity graphs are PDFs (probability density functions); the x-axis is climate sensitivity in K (= deg. C), defined as the equilibrium rise in global mean temperature for a doubling of atmospheric CO2 equivalent; the y-axis is probability density, which measures the relative probability of each climate sensitivity value on the x-axis being the true value. The area under each curve is one, since it equals the total probability, and climate sensitivity must have some value. (Climate sensitivity is treated as having a fixed value and the possibility of it exceeding 10 K was ignored in the IPCC graph.)

2. The Forster & Gregory study had little dependence on climate models. As it involved global measurements of changes in radiation at the top of the atmosphere with surface temperature, it would take varying humidity of the air into account. I recommend reading the study. It is available for free at

3. HaroldW is correct in saying "I'd also suggest that the observational constraint on TCR, at the upper end, is stronger than the models' limit of 3 K."
TCR is the rise in global temperature at the end of a 70 year period during which CO2 levels, rising at 1% pa compound, doubled. An important recent study (Gillett et al, 2012) derived a 5-95% range for TCR of 1.3 to 1.8 K, significantly below most GCM simulation results. It found that the 1901-2000 period commonly used for instrumental studies, and to tune GCM parameterisations, gave abnormally high estimates of TCR (and therefore of climate sensitivity) due to the first two decades of the twentieth century being exceptionally cold. Other periods (1851-2000, 1901-2010 and 1851-2010) gave substantially lower estimates, which were to be preferred.
Note that if ocean heat uptake is quite low, as both measurements and observationally-constrained studies indicate, then if TCR is low climate sensitivity will not exceed TCR very much.
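The 70-year figure in that definition is just compound-growth arithmetic, which is easy to verify (a quick illustrative check in Python):

```python
import math

# CO2 growing at 1% per year compound: after n years the level is
# 1.01**n times the starting level, so the doubling time is
# ln(2) / ln(1.01) years.
doubling_time = math.log(2) / math.log(1.01)
print(f"doubling time at 1% pa: {doubling_time:.1f} years")  # ~69.7

# So after 70 years the concentration has essentially doubled:
print(f"factor after 70 years: {1.01**70:.3f}")  # ~2.007
```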

Oct 1, 2012 at 4:49 PM | Unregistered CommenterNic Lewis


"Excel was always frowned on as a calculational tool because of too many hidden assumptions and it could never be verified"

IIRC, it could be verified, but never was. Microsoft were always a bit careless in that respect, but they probably assume that any serious flaws would have become apparent by now. Always preferred Quattro, myself...

Oct 1, 2012 at 4:54 PM | Unregistered CommenterJames P

So the answer to the model sensitivity question is that the original Forster and Gregory distribution alone gives a not-too-bad answer, but including any of the other climate sensitivity models at all has a significant impact on the result.
To put it another way: if the cost model is right, then for the cost to be manageable you would need to invalidate even the more moderate sensitivity model results. This kind of supports the assumption that the exact distribution is not too critical to the analysis.

Oct 1, 2012 at 5:18 PM | Unregistered CommenterSean Houlihane

Dave Salt: because the black body emission claim for the Earth's surface is plain stupid, the climate models are bunkum. In reality, the real GHG is thermal IR from GHGs reducing surface emissivity.

Oct 1, 2012 at 5:23 PM | Unregistered CommenterAlecM

Nic -
Thanks for the Gillett reference, which I will have to re-read. Concerning the other question I raised, on the joint distribution of transient climate response (TCR) and climate response time (FRT), are you aware of a collation of models' TCR & FRT (or perhaps TCR & ECS), which might shed some light?

Oct 1, 2012 at 5:36 PM | Registered CommenterHaroldW

Just let's remember that CO2 is a trace gas and, whilst there is a suggestion that it might possibly increase global average temperatures (whatever that really means), the fact is that there has been no statistically significant warming this century. It is also known from both geological and historical records that CO2 has been much higher in the past, and so has temperature, but that, if anything, it is temperature that drives CO2 and not the reverse.

So the null hypothesis must be that climate sensitivity must be very low and that the effects of increasing CO2 levels are likely to be trivial so far as temperature is concerned and beneficial in terms of plant growth.

It is also clear that rent-seeking and politically driven "activists" have been ever more shrill in screaming doom and ever more active in shroud-waving, and that all their dread predictions, without exception, have been proved to be bunk.

So, after finding apple after apple after apple in the barrel to be rotten, we should expect to find good fruit lower down?

I don't think so.

I'm sure Chris Hope is a decent, well-meaning bloke. But I'm afraid I wouldn't believe anything that anyone who has been associated with this hugely damaging scam might say.


Oct 1, 2012 at 5:55 PM | Unregistered CommenterMartin Brumby

HaroldW wrote:
"are you aware of a collation of models' TCR & FRT (or perhaps TCR & ECS), which might shed some light"
You could start by looking at Frame et al "Alternatives to stabilization scenarios", Geophysical Research Letters, 2006, Vol 33, L14707. It is freely available somewhere on the web. It only deals with 4 models. I'm sure there are papers that deal with many more models, but I can't find one at present.

Oct 1, 2012 at 6:11 PM | Unregistered CommenterNic Lewis

This is what we get with models today...

"Researchers modelled the impact of rising temperatures on more than 600 species between 2001 and 2050.... Fish species are expected to shrink in size by up to 24% because of global warming, say scientists... The scientists argue that failure to control greenhouse gas emissions will have a greater impact on marine ecosystems than previously thought."

The primary objective of any model, it seems, is to support and promote a policy prescription. And policy in one direction only: mitigation, and the control of energy production, distribution and consumption that will lead us to sustainia, the global Republic, a radical decarbonisation and a life fit for hobbits in the shires.

Oct 1, 2012 at 6:45 PM | Unregistered CommenterJustin Ert

@Oct 1, 2012 at 11:04 AM | John Silver

"social cost of carbon"

Well, if you don't have it in the winter, you may freeze to death.
Thereby eliminating yourself from all social context.

I would add "Social cost of carbon? To a carbon based life form on a carbon based planet? What on earth are you thinking of [snip]?"

Oct 1, 2012 at 7:54 PM | Unregistered CommenterJeremy Poynton

I don't even accept the premise of having a "climate sensitivity" number that people can bandy back and forth as if it was some sort of real-world constant that actually meant something useful.

Oh get real! The world needs numbers or we would all blunder around talking about "big" and "small" like we were cavemen.

When people ask your income, do you say – "I'm sorry I can't answer that, as it differs from year to year, according to ...". Or do you give a number that is, in reality, an approximation?

The world is stuffed full of words that are approximations with no hard link to reality, but are nonetheless quite useful – inflation, unemployment rate, median income. Even ones we think of as hard and real often aren't – World War One did not end on 11 November 1918, inasmuch as people kept on killing each other for the same reasons they had before.

"Climate sensitivity" is reasonably well defined and even testable, in theory. Worrying about things like that doesn't make you a sceptic, it makes you look like a knee-jerk opponent of everything. Just like fretting about the trivial errors in Excel, actually.

There are some useless statistics in Climate Science – "global temperature" prime among them – but if we throw away all numbers then we are regressing to a pre-scientific age.

Oct 1, 2012 at 8:37 PM | Unregistered CommenterMooloo

Mooloo -
If you accept that "climate sensitivity" is reasonably well-defined, you have to accept "global temperature" as well as a metric. The operational definition of equilibrium climate sensitivity is the steady-state change in global average surface temperature corresponding to a defined change in forcing (typically doubling of pCO2, in other contexts a unit change of 1 Wm-2).
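Converting between the two normalisations is straightforward using the standard simplified expression for CO2 forcing, F = 5.35 ln(C/C0) W m-2 (the sensitivity value below is a purely hypothetical example, not taken from any of the studies discussed):

```python
import math

# Standard simplified expression for CO2 radiative forcing:
# F = 5.35 * ln(C/C0) W/m^2, so a doubling of CO2 gives
f_2x = 5.35 * math.log(2.0)
print(f"forcing per CO2 doubling: {f_2x:.2f} W/m^2")  # ~3.71

# Converting a sensitivity quoted per unit forcing (K per W/m^2)
# into an equilibrium sensitivity per doubling (K):
lam = 0.8  # hypothetical example value, K per W/m^2
ecs = lam * f_2x
print(f"lambda = {lam} K/(W/m^2)  ->  ECS = {ecs:.1f} K per doubling")  # 3.0
```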

Oct 1, 2012 at 9:16 PM | Registered CommenterHaroldW

Perhaps some of my more policy oriented readers can explain to me why the political process would want to use figures that are "wrong" instead of figures that are right.

Maybe the "political process" does not look at the data, but at interpretations of the data from trusted scientific advisers - or rather spin doctors. Evidence is weighted not as a criminal court would weight it, but the opposite: hearsay ("the consensus of scientists believe") is given prominence over direct evidence, and the weak circumstantial evidence of climate models is given prominence over the stronger circumstantial evidence of empirical data. Furthermore, the independent corroboration of climate models by empirical tests is not carried out.

Oct 1, 2012 at 10:53 PM | Unregistered CommenterManicBeancounter

here are some reasons:

30 Sept: Daily Mail: Wind farms given £34m to switch off in bad weather: Households stung by secretive payments
Wind farm operators were paid £34million last year to switch the turbines off in gales.
Two days last week saw householders effectively hand £400,000 to energy firms for doing nothing...
It was always known the National Grid made ‘constraint payments’ – cash given to operators to temporarily shut down their turbines when electricity supply outstripped demand.
But what was not made public were details of so-called ‘forward trades’, in which the National Grid agrees a pay-out when the weather is expected to be stormy.
The money is paid out even before a turbine shuts down...
The National Grid has admitted £15.5million was paid out to energy operators in the form of conventional constraint payments in 2011-12 in England and Scotland.
But for the first time it has emerged that an even greater sum – £18.6million – was paid out in forward trades. It means the total payments for that year were £34.1 million, far higher than previously reported...
Murdo Fraser, a member of the parliament in Scotland, where many wind farms are sited, said: ‘Why have the authorities been so anxious not to release this information? Is it because they feared this would undermine any remaining public confidence in renewable energy policy?
‘People will wonder if they were trying to cover up the truth.
‘The revelation that vast sums are being paid to wind power developers will just lead to more and more people questioning government policy.’
Details of which energy firms scooped the money is kept secret because of ‘commercial confidentiality’...

Oct 1, 2012 at 11:10 PM | Unregistered Commenterpat

Nic -
Thanks for the reference to Frame et al. Certainly a different way of looking at the problem, and one which avoids the distraction of a hypothetical equilibrium in the distant future, said equilibrium being unlikely to be reached if only because the technology of a century hence is unknowable at present. Long-term greenhouse gas concentration trajectories such as they present seem more plausible to this reader than the mere extrapolation of the current upwardly-convex history.

Oct 2, 2012 at 3:45 AM | Registered CommenterHaroldW

Oct 1, 2012 at 12:34 PM | chris hope
(A model run where changing CO2 does not affect temperature directly).

No, it's not right to change words. Your reply was "if an extra tonne of CO2 makes no difference to the climate, the extra damage it causes is zero." My inquiry was about CO2 making no difference to TEMPERATURE. There's a big difference: temperature is not climate. In a rudimentary case, a CO2 increase causes increased crop yields and affects farm economics.
In a non-rudimentary case, if CO2 is constrained not to change temperature, we have a buyers' market in windmills.

Oct 2, 2012 at 5:31 AM | Unregistered CommenterGeoff Sherrington

"...a buyers' market in windmills."

Ha! What a nice phrase.

Oct 2, 2012 at 8:37 AM | Unregistered CommenterAlan Reed

An excellent post and kudos to Chris for joining in.

Am I correct in thinking from this that 'if' there is a feedback response from a doubling of CO2, then the empirical data suggests that the most likely temperature increase will be 1.5K per century? From this we can determine that CAGW should really be aGW.

Ties in nicely with Jo Nova's latest, if the site stays up long enough.

The models didn’t correctly predict changes in outgoing radiation, or the humidity and temperature trends of the upper troposphere. The single most important fact, dominating everything else, is that the ocean heat content has barely increased since 2003 (and quite possibly decreased) counter to the simulations. In a best case scenario, any increase reported is not enough. Models can’t predict local and regional patterns or seasonal effects, yet modelers add up all the erroneous micro-estimates and claim to produce an accurate macro global forecast. Most of the warming happened in a step change in 1977, yet CO2 has been rising annually.

Oct 2, 2012 at 8:41 AM | Unregistered CommenterLord Beaverbrook

Bish, I think you and Feynman have correctly explained Trenberth's "missing heat": it was never there to be lost. To add to Lord Beaverbrook's reporting of Jo Nova's sea temps and Dr. Roy Spencer's lower atmosphere temps, we have the Scott Polar Research Institute's 2012 news:

“To generalize our results, the tree line is definitely moving north on average but we do not see any evidence for rates as big as 2 kilometers per year anywhere along the Arctic rim,” he said in a news release. “Where we have the most detailed information, our results suggest that a rate of around 100 meters per year is more realistic. In some places, the tree line is actually moving south. The predictions of a loss of 40 percent of the tundra by the end of the century is probably far too alarming.”

And of course, polar ice at a record high.

CaGW is a rapidly failing hypothesis and with all of the climate industry based upon it, a very expensive folly.

Oct 2, 2012 at 9:43 AM | Unregistered Commenterssat

Referring to PAGE 2009 version (p.4) Chris Hope states: “The carbon cycle feedback (CCF) is introduced as a linear feedback from global mean temperature to a percentage gain in the excess concentration of CO2, to simulate the decrease in CO2 absorption on land and in the ocean as temperature rises (Friedlingstein et al, 2006).”
This is the first of not a few serious flaws in the PAGE models, as there is NO evidence for "a linear feedback from global mean temperature to a percentage gain in the excess concentration of CO2", and there is even less evidence for a "decrease in CO2 absorption on land and in the ocean as temperature rises".
An impeccable source for the lack of evidence for the Hope-PAGE claims is W. Knorr, one of the 27 or so co-authors of Friedlingstein et al., see his “Is the airborne fraction of anthropogenic CO2 emissions increasing?” (GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L21710, doi:10.1029/2009GL040613, (October) 2009).

Knorr’s answer is an emphatic NO, which may explain why his paper does not rate a mention in Hope 2011. Knorr shows that since the 1850s “only around 40% of those emissions have stayed in the atmosphere, which has prevented additional climate change”, whereas if there were feedback as claimed by Hope there should be a steadily rising trend. I myself have also shown, using CDIAC emissions data and Mauna Loa CO2 data from 1958 as collated by the GCP, that there has indeed been no such trend (E&E October 2009).

In short, PAGE is on shaky ground when one of its core assumptions is demonstrably false.
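The airborne fraction is simple enough to check on the back of an envelope. A sketch with illustrative round numbers (roughly 2 ppm/yr observed CO2 rise, about 2.13 GtC of carbon per ppm of atmospheric CO2, and total emissions of about 9.5 GtC/yr; ballpark figures for illustration only, not taken from the papers cited above):

```python
# Airborne fraction = (annual atmospheric CO2 increase, as carbon) / emissions.
# Illustrative round numbers: ~2 ppm/yr observed rise at Mauna Loa,
# ~2.13 GtC of carbon per ppm of atmospheric CO2, and total emissions
# (fossil fuel plus land use) of roughly 9.5 GtC/yr.
ppm_rise_per_year = 2.0
gtc_per_ppm = 2.13
emissions_gtc = 9.5

airborne_fraction = ppm_rise_per_year * gtc_per_ppm / emissions_gtc
print(f"airborne fraction: {airborne_fraction:.0%}")  # roughly 45%
```

The point is that this ratio has stayed roughly flat, whereas a temperature-driven feedback on the excess concentration would imply a rising trend.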

Oct 2, 2012 at 9:45 AM | Unregistered CommenterTim Curtin

Lord Beaverbrook: the simple fact is that there is no credence in any prediction of CO2-AGW, because the physics in the models is wrong from the very start. I know I'm a lone voice in the wilderness, but an increasing number of people agree with the physics I am proposing to fix the mistakes.

Oct 2, 2012 at 9:50 AM | Unregistered CommenterAlecM

Dave Salt: "I believe most would agree that this is a completely valid approach **if** the mathematical representation of the Earth is sufficiently detailed "

Dave, I read Chris Hope's paper (or the relevant bit) and was thinking like you. You're on the right track, but someone "like" Chris Hope is so utterly convinced he is right that I cannot easily think of a way to convince him otherwise ... except waiting until it becomes clear that his approach is not working.

But you are right, the key is the size of the unknowns (should I say unknown unknowns?), because Chris Hope does not seem to be trained to deal with this kind of problem, where the unknown unknowns dominate.

The reason I say "like" is not to denigrate, but because Chris Hope has learnt to approach problems in a particular way. The type of problem his "toolbox" is geared up to work with is the black-box problem, where the inputs and outputs are known and the system is presumed to be linear: hey presto, all you need is a simple model, and if the inputs and outputs match (historically) then it MUST be a good representation. So I understand why he is so persuaded by this approach.

In contrast, many people here are, I think, very familiar with "black box" problems, except that instead of standing outside a well-contained black box, we are inside the box, so to speak, where the problem is so interconnected with other events and inputs that it is very difficult to untangle what is going on. It is as if we are in the box with just a few wires and pulleys dangling inside, and a grubby black-and-white monitor pointing in one direction (all usually in need of some TLC).

So, whilst we do not reject the black box/modelling approach for simple systems, we are also very aware that such approaches fail abysmally in many real world situations. And, there are some classic indicators of problems:

1. The models do not predict (e.g. the current pause)
2. The modellers flip completely from one model to another ... "global cooling" to "global warming"
3. There is denial that the models aren't predicting, and denial that they have changed.
4. There is not equity of assessing potential inputs (solar/cosmic e.g. is not treated EQUALLY). If the rules of this modelling exercise allow CO2's effect to be scaled up by 300%, then it allows all inputs to be scaled up by 300%. So, you cannot reject solar activity because it is not a perfect fit.
5. Hiding of evidence .... another clear indicator that something is wrong, and that the culprit knows something is wrong.
6. Poor quality temperature data .... and more importantly: denial that it is poor quality data
7. A failure to understand the variability or to include "natural variation" (aka known unknowns) as a part of the model.

How can I be so certain Chris is wrong? The most compelling evidence is that I used to think like him, but then one day I said to myself: "Think professionally. Imagine these graphs are temperature and various inputs from a machine that I, as an engineer, was responsible for. Imagine the temperature graph had been showing a 'worrying increase' and that this other 'input' was shown to be increasing. How would I reasonably respond?"

My actual response was ... "oh my god why haven't I seen this before?" Why couldn't I see this dispassionately in this way?

But as soon as I stepped back from the problem and looked at it professionally, I realised I had seen plenty of situations like this. A measurement going off to infinity, panicky directors all in a fluff. What's the first thing one does:

**Check the readings are accurate** What is the first thing the sceptics did: CHECK THE GLOBAL TEMPERATURE WAS RELIABLE. ... and what was the response ... to call us denialists.

What is the next thing? What else could have caused it? .... What do sceptics look at? ... the solar activity, and we get told (in a way that is completely unbelievable) that it cannot be solar.

By this point, the engineer in me, is beginning to realise that the bigger problem is as much human as "machine". I need to investigate not only the original reading but the attitude of the people taking the reading. Often, this comes down to one department having a petty squabble with another, and they were trying to "prove a point" and exaggerate the "problem", which doesn't mean it doesn't exist, but it does mean the real problem may be somewhere other than the machine.

And in my experience, very few of these catastrophic "problems" were what they seemed when first presented. And e.g. one machine I attended, which I was assured had a fault in the control system, turned out to have never been greased ... and a bit of grease (which they just had not thought about) fixed it.

So, one learns never to assume the original problem as presented is the real problem. There are human, instrumentation, inter-machine problems.

And do I expect Chris Hope to suddenly "see the light" ... not on your nelly. When I went from a science degree to industry, I was just as arrogant as he and it took several years of humbling experience to realise that science is useful in real life but experience is what really counts.

Oct 2, 2012 at 10:44 AM | Registered CommenterMikeHaseler


I know your stance on radiative physics and don't entirely disagree. It will be good to see this paper that you are writing hit the streets, so to speak.

BUT, when looking at policy, and in particular a change in direction of policy, the diplomat will achieve more by gentle pressure on the pilot's arm than by hanging over the side of the boat screaming 'this way!'.

Oct 2, 2012 at 10:50 AM | Registered CommenterLord Beaverbrook

... why the political process would want to use figures that are "wrong" instead of figures that are right

Same reason the process at tobacco company laboratories used figures that showed smoking was harmless - i.e. vested interest.

Here, the political process wants to fatten and further empower itself, and doesn't want to let mere facts stand in its way. So it changes them.

Oct 2, 2012 at 10:58 AM | Unregistered CommenterTomcat

Vis-à-vis your last paragraph:

"If Feynman is to be believed, the climate sensitivity values used in the Stern review are "wrong". Perhaps some of my more policy oriented readers can explain to me why the political process would want to use figures that are "wrong" instead of figures that are right."

My early career (mid-to-late 1970s) involved significant amounts of work on models (primarily for the US Government, but also for consortia of private clients) which were meant to explain the energy demands of the household sector (heating (air and water), lighting, air conditioning, appliances, etc.) and how that sector might respond to price changes and the introduction of energy-saving technologies (insulation, solar, appliance design). At the time, the only models available to do so were vast econometric Leontief-style input-output matrices, based on macroeconomic data and supported by the best mathematics and computing power of the day, the observed sensitivities of those data to various inputs, and then forecasts based on the seemingly most influential inputs (generally the cost of energy, in all its variants). The "problems" were that:

1. In 1973 (when I started this work), the models did not seem to be acting "correctly." Due to the Yom Kippur War and the rise of OPEC, prices were rising rapidly (from $4/bbl of oil to $7, or so!) but the demand from the household sector remained relatively stable.

2. The econometric models had no credible mechanisms to forecast based on the introduction of new technologies (particularly solar).

Our solution to those problems (I was working with Arthur D. Little, Inc. at that time) was to develop a disaggregated model based on modelling both how energy was actually spent in each household and how households reacted to choices as to whether or not to invest in energy-saving technologies. It turned out that we were far more successful in predicting responses to external changes than the existing models, and the last time I looked (a year or two ago) the bare bones of the model developed in 1974 were still in use at the US Government for the same purposes (although full of all sorts of bells and whistles which have improved our primitive efforts).

There were many struggles to gain acceptance for these models, but to get to the essence of your last paragraph, here are my best memories as to why we often had antagonistic "struggles" rather than constructive "conversations" at the policy level:

1. Our clients (all predecessors of the current DOE) had invested significant funds in developing and sustaining the econometric models that we were questioning and "competing" against. Nobody, particularly an organisational functionary, wants to admit that the model he or she has championed for some time (and spent a lot of taxpayer money on) might not be as predictive as others.

2. Our models deviated from the lockstep supply-demand approach to look at the real world. As a simple example, we postulated (and later showed) that the rapid increases in the demand for electricity in the past were not so much related to the price of electricity (which had been falling in real terms for years) as to the fact that technologies had been introduced (e.g. refrigerators, air conditioners) whose adoption could never continue to grow at such a rapid rate (once you have achieved 200% penetration - one refrigerator in the kitchen, one in the garage - how can the growth in demand do anything but slow down?). For another example, we proved that the rapid growth in demand for fossil-fuel heating in the post-war period was largely related to the replacement of wood-burning stoves with gas and oil boilers, rather than a crude response to the decline in relative prices.

3. Finally, many of our clients (the policy makers and/or implementers) were "true believers" in the righteousness of their policy/technology. As such, the more the facts (or even tentative postulations of the possibility of alternative "facts") contradicted their beliefs, the more argumentative and intransigent they became.

Does this not sound all too familiar? It does, at least to me....

Oct 2, 2012 at 11:06 AM | Unregistered CommenterRichard

Totally off topic, but maybe a seed for another subject. There was an item on the Today programme (Radio 4 - 2/10/12, ~08:50 onwards) about using excess electrical power, such as might be available in high-wind conditions, to liquefy air and then, during high-demand periods, re-evaporate it to drive generator turbines. They were claiming it's 70% efficient. Just wondered if this had been discussed here before; I'd be interested to hear from anyone with a balanced view on its pros and cons.
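As a back-of-envelope aside on that 70% claim: round-trip efficiency is just the product of the efficiencies of each stage in the storage chain, so a quick sketch shows how demanding the figure is. The stage values below are illustrative assumptions only, not measured numbers (real plants reportedly lean on waste-heat and cold recovery to do better):

```python
# Sanity check on a claimed 70% round-trip efficiency for liquid-air
# energy storage: the round-trip figure is the product of the
# per-stage efficiencies. All stage values are illustrative guesses.

stages = {
    "liquefaction (compress and cool air)": 0.80,   # assumed
    "cryogenic storage (boil-off losses)": 0.98,    # assumed
    "re-evaporation and turbine expansion": 0.88,   # assumed
}

round_trip = 1.0
for name, eff in stages.items():
    round_trip *= eff
    print(f"{name}: {eff:.0%}")

print(f"implied round-trip efficiency: {round_trip:.0%}")
```

Even with those fairly generous stage figures the product comes out just under 70%, so the headline number implies very efficient machinery at every step.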

Oct 2, 2012 at 11:32 AM | Unregistered CommenterMike A

Thanks for telling that great war story, Richard (good name - but no relation!). What an excellent thread this is turning out to be. I'm grateful to Nic and HaroldW for further education on the challenges of defining and measuring sensitivity, or anything approximating to it, and especially for the reference to Gillett et al, showing that even the 1900 (cold) and 2000 (hot) end points have in effect been cherry-picked. Not that the Gillett paper is beyond criticism - Pielke Sr did a pretty good job of that on 27th January, returning to a Feynman-like critique of model-driven science, as well as the difficulties of capturing and making meaningful something as slippery as a globally averaged temperature anomaly.

I've been catching up in the meantime with Judith Curry's vital thread Academic versus professional perspectives, prompted by Latimer Alder, and Richard's experience further reminds me of one of my favourite stories of successful forecasting in the commercial world.

A well-known supermarket chain asked a small, boutique forecasting outfit, known for its use of advanced techniques (partly owing to its founder's Cambridge maths background), which of its stores would be the first to recover from the drastic slump in demand for British beef during the BSE crisis. In fact it was more specific than that: as a test, it asked the company to forecast the ten stores that would be first to recover. The boffins thought about this, discarded their advanced stats, and simply asked where in the UK there were the most households without growing children, which they guessed would be the least risk-averse in this area. A simple bit of demographics later, a list of the predicted top ten stores was presented to the client. For some reason the supermarket believed the answer, ordered extra beef for those stores and made an absolute fortune, as they were exactly the ones to recover sales first. Astounded, the chain for some time afterwards used the small firm for all its forecasting needs, which soon made it a considerably larger one.

This is what the real world is like, and just sometimes one gets it right, in a completely verifiable way, in a way Richard Feynman himself would appreciate. Climate science somehow suffers from the worst of all worlds: locked in academia, and addicted to forecasting that is almost unverifiable, at least within most human beings' lifetimes, let alone within the lifetime of a parliament in a democratic country like ours.

Oct 2, 2012 at 11:49 AM | Unregistered CommenterRichard Drake

Richard: "Does this not sound all too familiar? It does, at least to me...."

Great comment. There certainly are strong indications that certain models are being championed.

The other key point is that you repeatedly mention experience ... you tied your model to experience even when that was not strongly supported by theory.

What we see in climate is that theory is pushing the models, and the reason we sceptics are so hated by climate academics is that we constantly harp on about what is actually happening (which doesn't match their models).

This is perhaps the big demarcation line. On one side are the academics who are, as you put it, "true believers" in the righteousness of their policy/technology; on the other are the "pragmatists" who look to see what is actually happening and dismiss models and modellers who don't have a proven record of being able to predict the real world.

But far worse than the model not matching what happens, is when the modellers are in denial about the failures of their models. That is the point at which you do not even need to understand the model to know it is useless.

Oct 2, 2012 at 11:56 AM | Registered CommenterMikeHaseler

Nic and HaroldW,

If you want a comprehensive list of TCRs & FRTs exhibited by the models in the IPCC 4th Assessment, you might be interested in this paper. Table 1 and Figure 3 are the relevant bits :-)

Oct 2, 2012 at 1:26 PM | Unregistered Commentertilting@windmills

tilting@windmills, I think this is the relevant bit:

"This figure makes clear that the origin of the discrepancy between the IPCC ensemble and the climate system properties consistent with recent climate observations lies in the distribution of TCR"

In other words they have just made up TCR and are now finding their duff figures don't match what is actually happening.

You can't get from a 1°C rise from the greenhouse effect of CO2 to 4.5°C without being a complete moron. A 450% scale-up, based on no science at all, in order to get the models to fit the past, ignoring all the other probable causes and denying a whole generation of understanding of noise and variability.
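For reference, the scale-up being disputed here is usually presented via the standard linear feedback relation dT = dT0 / (1 - f), where dT0 is the no-feedback response to doubled CO2 (roughly 1.1-1.2°C) and f is the net feedback factor. A minimal sketch, with illustrative feedback factors chosen only to bracket the familiar 1.5-4.5°C range (the numbers are assumptions, not from any study):

```python
# Linear feedback amplification: dT = dT0 / (1 - f), valid for f < 1.
# dT0 is the no-feedback (Planck) warming per CO2 doubling; both it
# and the feedback factors below are illustrative assumptions.

def amplified(dT0, f):
    """Equilibrium warming given a net feedback factor f (f < 1)."""
    return dT0 / (1.0 - f)

dT0 = 1.2  # degC per doubling, no-feedback estimate (assumed)
for f in (0.0, 0.2, 0.6, 0.73):
    print(f"f = {f:.2f}: dT = {amplified(dT0, f):.1f} degC")
```

The whole dispute, in other words, is over the size and sign of f: small or negative feedbacks leave dT near 1°C, while f approaching 0.75 is what produces the 4.5°C upper bound.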

It was not bad science, it was criminally incompetent science.

Oct 2, 2012 at 1:51 PM | Registered CommenterMikeHaseler

Thanks Richard D and Mike H for your excellent follow-on comments. We are, of course, just wee mortals when saying in paragraphs what Feynman said in 1 minute and 3 seconds.

Richard G

Oct 2, 2012 at 1:53 PM | Unregistered CommenterRichard

Just a post to thank Chris for participating here. It's nice to hear from a modeller, whatever some of us may think of models in general.

Chris comes across as eminently reasonable, which probably explains his comment upthread in response to suggestions that the IPCC may have, um, exaggerated in AR4:

"If what you say about the IPCC is true, I presume the climate scientists' views will change in the 5th assessment report." (Oct 1, 2012 at 4:08 PM)

Chris - not everyone is as reasonable, honest and flexible as you are!

Oct 2, 2012 at 2:08 PM | Unregistered Commentercui bono

A fascinating thread - I was particularly taken by the Dave Salt / Mike Haseler exchange. Re climate models, it occurs to me that someone might be able to help me. Here's why:

At the AGM last week of a charity of which I'm a trustee, I had a disagreement with a speaker one of whose slides I had challenged. (He's a reader in psychology at a local university who makes presentations about climate change.) Subsequently, I sent him an email requesting a reference to "published, peer-reviewed empirical (real-world, not theoretical or computer-based) evidence, accessible for confirmation by independent researchers" that supported his position.

He replied by saying there was a "serious epistemological problem" with my request, suggesting that I "misunderstand the nature of hypotheses in relation to climate observations". He continued:

I think your comment about “computer based” reveals a basic error. In regard to a system as complicated as the climate, the computer models ARE the hypotheses that seek to explain the observable facts. While there are disputes about which facts need explaining, and what those facts are, there is no epistemological debate that hypothesised explanations for those facts will come in the form of “computer based” models and simulations. It is not the evidence that is computer based, it is the hypotheses. Theoretical development does not proceed by just collecting facts; it comes from testing hypotheses against those facts (i.e., pieces of data that best need explaining). I don’t understand your point, therefore, but refer you to Karl Popper, Imre Lakatos, and Quine & Duhem on philosophy of science.

This seems to me to be nonsense: how can the computer models be the hypothesis? And, to claim that the hypotheses explain the observable facts surely turns the Scientific Method on its head? He says he doesn't understand my point - well I certainly don't understand his. But I'm a lawyer and could well be getting it wrong. I plan to reply - so can anyone help?

Oct 2, 2012 at 2:17 PM | Registered CommenterRobin Guenier

Mike - perhaps you didn't read the next sentence?

"There is, however, a clear bias towards lower values of TCR in the IPCC ensemble relative to those values that are consistent with recent observed climate change."

Clearly you think the models overestimate TCR, but I wouldn't quote this particular paper in support.

Oct 2, 2012 at 2:42 PM | Unregistered Commentertilting@windmills

Robin: have a look at the Roger Pielke Sr article I cited earlier:

At some point, the entire climate science community is going to realize that models are just hypotheses

Your psychologist friend is fine on that. The vital piece to add, again from Pielke:

Scientific rigor requires that real world observations be used to test the models, not the other way around! It is inappropriate to use multi-decadal climate model predictions (even in a hindcast mode) to make conclusions on real world attributions without such an observational validation.

Save your money on the Popper, Lakatos, and Quine & Duhem for now and perhaps read Pielke's short June 2009 piece Short Circuiting The Scientific Process, cited in the one in January. Others may have better suggestions.

Oct 2, 2012 at 2:48 PM | Unregistered CommenterRichard Drake

Scientific rigour starts with simple observations and sensible analysis. An example of how not to do it is in this Met. Office propaganda:

'Heat islands exist because the land surface in towns and cities, which is made of materials like tarmac and stone, absorbs and stores heat. That, coupled with concentrated energy use and less ventilation than in rural areas, creates a heating effect.'

Wrong; radiation and convection are coupled. Reduce convection by erecting walls; to maintain constant convection + radiation, temperature has to rise. The beach windbreak is a good example. These people haven't looked at basic heat transfer. They do not admit the Earth cannot emit IR as an isolated black body in a vacuum, without which the Arrhenius 'GHG blanket' idea cannot work. Think about it.

Oct 2, 2012 at 3:18 PM | Unregistered CommenterAlecM

Maybe the models will be adjusted over time to explain all observable facts that arise.
Tax has already started, of course

Oct 2, 2012 at 3:39 PM | Unregistered CommenterAlan Reed

Sorry for the lack of response. I've been having problems with the Bish's comments system.

S Matthews: Yes, the positive impacts are allowed for in PAGE: 'In PAGE09, extra flexibility is introduced by allowing the optional possibility of initial benefits from small increases in regional temperature'. See

Geoff Sherrington: Sorry, I misunderstood the point in your first post. Yes, there could well be other effects of increased CO2, but the PAGE09 model only calculates impacts from regional temperature change and sea level rise.

Tim Curtin: You would be quite at liberty to set the carbon cycle feedback in PAGE09 to zero if you used the model, as long as you were happy justifying why you did it.

cui bono: Thank you, and to all the commenters for a productive dialogue.

Oct 2, 2012 at 4:41 PM | Unregistered CommenterChris Hope

Chris Hope: I suggest that you might consider an appendix incorporating new work by experts on heat transfer and IR physics, who are concluding that the 'consensus' is based on, at best, amateur physics, and that as we head into the new Little Ice Age the picture will be very different.

Key areas are the offsetting by increased CO2 of much shorter growing seasons, and the much higher mortality rate as LIA-type winters affect the vulnerable. A starting point might be John Evelyn's LIA diaries of the 1690s, where he wrote of 'treeless springs', with bud-break in early June. There is no proof of any CO2-AGW, and when you correct the 'consensus' physics there can be very little, if any.

Oct 2, 2012 at 5:09 PM | Unregistered CommenterAlec

Re Roger Pielke senior's criticisms of Gillett et al 2012, being:

i) a surface temperature record back to 1851 which is not spatially representative and has unknown biases with respect to changes in the local conditions where the temperature measurements were made during this time period (e.g. see Fall, 2011),

ii) a model is used for the attribution study of the forcings, yet these models do not have all of the first-order climate forcings and feedbacks accurately represented (e.g. see NRC, 2005).

I share his views entirely on point ii). It seems more likely that the attribution study overstates rather than understates sensitivity to greenhouse gas forcing.

I am somewhat less concerned about his point i), since Gillett's results were much the same whether data spanning 1851-2010, 1851-2000 or 1901-2010 was used. Only the commonly used 1901-2000 period produced a substantially higher estimate of TCR.

Some climate scientists are pushing TCR as an alternative to climate sensitivity. But it doesn't really reduce uncertainty much if ocean heat uptake is modest, as it seems increasingly clear it is. The same problem ii) mentioned by Pielke is a major issue with most, if not all, TCR studies. If the models are attributing to greenhouse gases warming actually caused by something else (eg natural variability and/or solar influence), it is false to assume that further increases in CO2 levels will be accompanied by a temperature rise commensurate with their relationship over the last half century or so.
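The role ocean heat uptake plays here can be illustrated with the standard energy-budget estimators (Gregory/Otto-style): ECS = F2x·dT/(dF − dQ) and TCR = F2x·dT/dF. A minimal sketch with made-up illustrative inputs (not from any study) shows how the two converge as the uptake dQ shrinks, which is why modest uptake means TCR adds little information beyond ECS:

```python
# Energy-budget estimators of climate sensitivity:
#   ECS = F2x * dT / (dF - dQ)   (equilibrium sensitivity)
#   TCR = F2x * dT / dF          (transient response)
# All inputs below are illustrative assumptions, not from any study.

F2X = 3.7  # forcing from doubled CO2, W/m^2 (standard value)
dT = 0.8   # observed warming, degC (assumed)
dF = 2.0   # change in total forcing, W/m^2 (assumed)

for dQ in (0.7, 0.3):  # ocean heat uptake, W/m^2 (assumed)
    ecs = F2X * dT / (dF - dQ)
    tcr = F2X * dT / dF
    print(f"dQ = {dQ}: ECS = {ecs:.1f} degC, TCR = {tcr:.1f} degC")
```

With the larger uptake the two estimates differ noticeably; with the smaller one they nearly coincide, so the choice of metric buys little extra certainty.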

Oct 2, 2012 at 5:31 PM | Unregistered CommenterNic Lewis

Thanks for the reminder that the Andrews and Allen paper has an analysis of climate sensitivity (ECS), TCR etc. for the IPCC AR4 GCMs. I was looking at that paper recently, but concentrating on their simple climate models: have you noticed that the one they apply to analyse the IPCC GCMs is quite different from the (very unrealistic - no deep ocean) model they use to illustrate relationships involving ECS, TCR and other variables?

Oct 2, 2012 at 6:08 PM | Unregistered CommenterNic Lewis

Chris Hope: That appendix may want to wait till the much-anticipated first paper demolishing existing climate radiative physics is finally published by the person suggesting it at 5:09 PM (one assumes - the four letter moniker this time would make it four different pseudonyms used in these parts, by a conservative estimate). Lord Beaverbrook earlier looked forward to that landmark day but I think we can say with certainty that nobody on Bishop Hill looks forward to it more than I.

Nic: I didn't read Pielke Sr in point (i) as querying the 1851 choice per se - I think he would agree with the authors and with you that the choice of 1901-2000 was an unfortunate one. I think it was the much more general gripe about how we measure the Earth's temperature consistently via anomalies. Essex and McKitrick are at the extreme end of the spectrum in saying it's meaningless to try, and from what I pick up Pielke Sr isn't far behind. But I could be wrong about what he meant here.

Oct 2, 2012 at 6:21 PM | Unregistered CommenterRichard Drake

Chris Hope's comments and positive participation on this thread have been exemplary, and deserve acknowledgement.

His Cantab bio highlights the following

'Dr Hope was the specialist advisor to the House of Lords Select Committee on Economic Affairs Inquiry into aspects of the economics of climate change, and an advisor on the PAGE model to the Stern review on the Economics of Climate Change'.

That Lords Select Committee, which assessed climate change in great depth, taking evidence from a wide-ranging spectrum of opposing viewpoints, produced what was perhaps the most honest attempt our Westminster Government has ever made to audit it.

Whether they were successful or not is debatable, but their report stands, as far as I can see, as the only valiant archived effort from 2005 to do so. It is linked below, and contains penetrating criticisms of the IPCC process in places, for instance para 171: 'We can see no justification for an IPCC procedure which strikes us as opening the way for climate science and economics to be determined, at least in part, by political requirements rather than by the evidence. Sound science cannot emerge from an unsound process.'

Chris Hope I am sure is well aware of the remit of the IPCC and his comment about the IPCC at 4:08 PM Oct 1, 2012 was uttered, I am certain, with tongue firmly in cheek.

Oct 2, 2012 at 9:21 PM | Registered CommenterPharos

Pharos: "Chris Hope's comments and positive participation on this thread have been exemplary, and deserve acknowledgement."

Pharos, with Kyoto about to collapse apparently due to a lack of political trust in climate "scientists", I would need to see far more to say "exemplary".

I will be blunt. Chris Hope is one of the many "scientists" who either stood by or actively encouraged others to:

1. Liken us sceptics to paedophiles and sex slavers (John Bell BBC Today)
2. Repeatedly and falsely categorise us as "deniers" even though almost all sceptics accept the 1°C greenhouse warming effect of CO2 and acknowledge warming in the 20th century.
3. Suggest that we be locked up in concentration camps
4. Denied sceptic Scientists funding
5. Forced people out of jobs and/or prevented groups employing young researchers
6. Bullied editors to resign if they printed sceptic papers
7. Lied to inquiries about their research
8. Stole papers from the Heartland Institute
9. Set out to denigrate those who disagreed with them by suggesting we are conspiracy theorists.

And generally he and his colleagues stood by, getting paid handsomely at our expense and falsely stating overwhelming certainty where it did not exist, when they either knew their own colleagues had been involved in malpractice or turned a blind eye to it. They allowed their colleagues to insult us, denigrate us, try to humiliate us and suggest we were in the pay of BIG OIL; and when we sceptics, in our own time and at our own expense, went through the procedures to get this investigated, they then allowed their colleagues to be "vindicated" when nothing could have been further from the truth.

Come on. No one can deny breaking FOI law is a crime - you cannot break the law and be "vindicated", yet Chris Hope and all his colleagues stood by and encouraged this view that climategate had vindicated those involved. THAT IS WHY KYOTO DIED!

Now, they come to us, not with apologies from their behaviour in the past, nor even an admission of their failings, but yet again to push their view at us one more time in the hope that someone will believe them before the whole thing falls apart when Kyoto collapses on the 31st December.

The main point, however, is that there are still academics in Universities in the UK who fear making it known they are sceptics because people like Chris Hope are able to make life very difficult for them. And, I do not mean to suggest that Chris Hope is actively writing letters suggesting sceptics be sacked. That is not necessary, he only has to remain silent whilst he allows the zealots he knows exist in the subject to wield the axe.

Oct 2, 2012 at 10:00 PM | Registered CommenterMikeHaseler

I said on this thread, Mike.

Oct 2, 2012 at 10:08 PM | Registered CommenterPharos

Pharos, OK - I'm probably just miffed. I've read Chris's paper and it really needs far more time than I can give it. Then today I felt obliged to ask for an invitation to an event at the Royal Society. More than likely they'll just snub me, which would be just as well, because it's an event where, if I went, I would be treated like a pariah by people like Chris Hope ... and to boot I'd have to fork out nearly £200 of my own money just to attend. That's £200 my kids will not have for Xmas. I'm just not in a very charitable mood - particularly when I see academics bearing papers (Greeks, gifts).

Oct 2, 2012 at 11:43 PM | Registered CommenterMikeHaseler
