Sunday, May 20, 2012

Myles Allen on Berlin's two concepts of liberty

Simon Anthony sends this report of Myles Allen's recent lecture at Oxford.

Myles (I think he'd prefer I call him "Myles" rather than Prof Allen, as most people in the audience seemed to refer to him thus) is Professor of Geosystem Science in the School of Geography and the Environment and heads the Climate Dynamics group in the Physics Department, both at Oxford. His main interest has been the attribution of aspects of climate, particularly "extreme events", to human activities. Recently he's been working on how to use scientific evidence to "inform" climate policy.

The lecture's title comes from Isaiah Berlin's contrast between "negative" and "positive" liberty. These can be (slightly) caricatured as, respectively (and perhaps contrarily), freedom from constraints (eg tyranny) and freedom to do particular things (eg vote for the tyrant). Amongst other things, Berlin was concerned about the possible abuse of positive liberty, in which the state prescribes what is permitted rather than ensuring the conditions in which individuals are free to make their own choices.

Myles contrasted two extreme views of how to address climate change: either continue as at present, so that 0.001% of the world's population choose to benefit from emissions of CO2 while the poorest 20% involuntarily suffer the consequences, or halt emissions and so demolish the capitalist, liberal, market system. In conversation afterwards he accepted this was a rhetorical flourish rather than a genuine choice. 0.001% of the world's population is ~70,000 people. He said this number was those who profited directly from extraction and burning of fossil fuels. But it omits shareholders and citizens who benefit from taxes paid by oil companies etc. And it omits those who, for example, drive, keep warm or light their houses. If these people were included, the number of beneficiaries would likely be rather more than the number suffering. So it seems more than a little disingenuous to characterise the "sides" in these terms. In any case, rather than have states impose strict controls, Myles wanted to investigate means by which emissions could be voluntarily curtailed and suffering compensated through negative liberty.

So, he says, assume that the IPCC's predictions are correct but that it will be 30 years before they are confirmed. What measures can be taken to reduce CO2 emissions? Offsetting doesn't work because what counts is cumulative emissions, not the rate. Centrally imposed limits would potentially mean big opportunity costs, as beneficial activities might not be undertaken. Is there instead some means by which the impacts can be traced back to CO2 emissions and the originators made to pay (cf. Deepwater Horizon)?

An essential component of any such scheme is that harm caused by climate changes should be correctly attributed to fossil fuel CO2 emissions. If that were possible then, on a pro rata basis of some kind, those companies responsible for the emissions and which had directly benefitted from extraction and burning of fossil fuels (oil, coal, gas, electricity, car manufacturers, airlines...) could be penalised and the proceeds distributed to those who were judged to have suffered.

Now Myles (I think somewhat inconsistently) seemed to accept that climate predictions for 30 years into the future were unverifiable, unproven and unreliable (perhaps not surprising when, as Richard Betts confirmed in another thread, even when the Met Office has the opportunity to assess its 30+-year temperature anomaly predictions, for example forecasts made in 1985, it chooses not to do the tests. One can only speculate as to why that might be.) He also accepted that the public might justifiably not believe the assurances of climate experts, particularly given the patchy record of mighty intellects in predicting the future. The examples he gave were Einstein, post-war, seeing imminent disaster unless a world government was immediately set up; a Sovietologist who in the mid-1980s confidently predicted the continuing and growing success of the Soviet Union; 30-year predictions of US energy use which turned out to be huge overestimates; and Alan Greenspan's view that derivatives had made the financial world much more secure. (I'd have been tempted to add Gordon Brown's (or George Osborne's) economic predictions, but time was limited.) There was, he suggested, very little reason to expect people to believe in the extended and unfeasible causal chain leading to predictions of temperatures in 30 years' time.

Instead Myles proposed that the frequency and pattern of "extreme" events is now well enough understood that the effect of CO2 emissions can be reliably separated from natural variations. He gave various examples of how models had been validated: the extent of human influence on the European heatwave of 2003 has been "quantified"; the Russian heatwave of 2010 was within the range of natural variation; and model predictions of annual rainfall in the Congo basin matched the "observations" uncannily well. (Myles himself initially doubted the extraordinarily good match, although he now accepts it's genuine. However, the "observations" weren't all one might expect: conditions for meteorologists in the Congo are understandably difficult, so there aren't any actual measurements. Instead an "in-fill" procedure was used to extend readings from safer locations to the Congo basin. I asked whether this agreement between a model and, um, another model was really a good test of either. Myles assured me that both models were reliable and show good agreement with measured data in, for example, western Europe. Still, an odd way to illustrate the reliability of model predictions.)

So although it isn't possible to predict climate reliably to 2050, current near-term regional forecasts may be good enough to show that the probability of extreme events has been changed by CO2. In any case, people who believe they've been adversely affected by climate change are free to take legal action against the companies they believe are responsible. Myles foresaw such litigation growing as the effects of climate change become more apparent.

An obvious question arises, rather like the "dog that didn't bark": if the evidence for the effect of AGW on extreme events is as strong as Myles and others claim, why haven't class actions already been brought, particularly in the US? "Ambulance chasing" lawyers aren't renowned for their reticence, but so far there has been no action of great significance. I don't think it's wild speculation to suggest that lawyers have examined possible cases but haven't yet thought the evidence strong enough to make it worthwhile proceeding. Of course at some stage such cases will come to court, and then Myles may find that his hope that they'll change the "climate" of debate will cut both ways. If a major class action against, say, oil companies, claiming compensation on the grounds that the 2003 European heatwave was due in part to CO2 emissions, were brought and failed, it would be a major setback to hopes for international laws to limit further emissions. While litigation won't advance science, it could be very politically significant, as well as entertaining, to have the arguments for AGW tried in court.

Finally, having been to three of the Wolfson lectures on climate change, I'd like to add a couple of observations. First, although all the speakers talked about the evidence for AGW, not one of them mentioned hockey-sticks. Stocker came closest when he said that current temperatures were the warmest for 500 years but didn't venture an opinion on the medieval warm period. I wonder whether it's too much to hope that the more scrupulous climate scientists are distancing themselves from the petulant antics and inept science of hard-core "Team" members. And second, two of the three speakers (Wunsch and Allen) said that there was little reason for people to believe that 30-year climate predictions were reliable. So perhaps the better climate scientists will stop relying on magical trees and statistics to make up the past and dubious models for scary futures. Instead they might try to do what Myles advocates and concentrate on shorter term understanding of the climate which might at least be testable.


Reader Comments (204)

Roger

Also it needs to be taken into account that the MO's Decadal Forecast/Prediction anomalies are given relative to the 1971-2000 climate "norm", whereas HadCRUT3 is relative to the 1961-1990 "norm". The difference is approx +0.12C, therefore a +0.35C "Decadal Prediction/Forecast" is equivalent to approx +0.47C on the HadCRUT3 baseline.
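
A minimal sketch of the conversion, taking the ~0.12C offset above as given (the exact figure depends on how you difference the two baseline periods):

```python
# Re-express an anomaly quoted against the 1971-2000 "norm" relative to the
# 1961-1990 "norm" used by HadCRUT3. The +0.12 C offset is the approximate
# figure quoted above and is illustrative only.
BASELINE_OFFSET_C = 0.12

def to_1961_1990_baseline(anomaly_vs_1971_2000: float) -> float:
    """Anomalies against the warmer 1971-2000 baseline read lower, so add the offset."""
    return anomaly_vs_1971_2000 + BASELINE_OFFSET_C

print(to_1961_1990_baseline(0.35))   # ~0.47, as above
```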

May 20, 2012 at 8:52 PM | Registered CommenterGreen Sand

Thanks again GS. Why is it so difficult to compare apples with apples?

I think that every MP who voted for the Climate Change Act should be made to look at these two graphs and asked whether this sort of methodology justifies an expense of billions of pounds every year.

May 20, 2012 at 9:24 PM | Unregistered CommenterRoger Longstaff

"Why is it so difficult to compare apples with apples?"

Roger, I could go on for hours, and yet, probably because I held the MO in a state of reverence in my youth, I still tend to put it down to incompetence in presentation rather than obfuscation.

However the last Decadal Forecast really does raise questions:-

Why does it now start in 1950 when it is only produced to show data from 1985? All this does is truncate the relevant (1985 onwards) period making it more difficult to assess.

Why are these decadal forecasts never updated, prediction v actual? They are produced from 120 monthly predictions, so there can be no reason why their performance is not reported on a monthly basis.

Why do they always show hindcasts and never previous forecasts?

Anyhow, enough; we watch and learn. I do have a problem, as I am beginning to see the output of the MO unit as strictly policy-related science. Nothing but nothing is produced that even borders on questioning the mantra. However there is an increasing number of references to "uncertainties", and that alone should make any scientist's antennae twitch.

May 20, 2012 at 10:50 PM | Registered CommenterGreen Sand

OK, I looked at the paper referenced by Richard Betts.
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3731.1

Climate science papers often seem to have:
- lots of authors (7 here)
- lots of pages (20 here)
- lots of references (60 here, if I counted right)

They don't make easy reading. (This one is far more readable than many.) I'm guessing it's:
- partly because they contain a lot of things that the authors want to get across,
- partly because that happens to be the accepted style in the field,
- partly because, with lots of authors, it's even harder to make a paper succinct than with just one or two authors.

To read a paper like this one in detail and to appreciate the significance of every line in the paper would take a lot of time. But it's necessary to appreciate that this paper is a report on the use of the HadGEM1 model. It is not a report on the testing and validation of the model.

Section 2 describes in overview the modules that comprise the HadGEM1 model. The ways the modules were tested individually, and the way they were tested jointly after having been integrated, are not touched on. Nor is the way the models were "parameterised" (to use the climate science terminology) to incorporate effects that are insufficiently well understood to be modelled directly from physical principles and so have to be modelled using an empirical formula with parameters adjusted to reproduce observed effects.
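
To make concrete what "parameterised" means in this sense, here is a toy sketch (nothing to do with the actual HadGEM1 schemes; the functional form, the "observations" and the fitted value are all invented): a sub-grid process is represented by a simple empirical formula whose free parameter is adjusted until the formula reproduces observed behaviour.

```python
import numpy as np

# Toy "parameterisation": a sub-grid process (cloud fraction) represented by
# an empirical function of grid-box relative humidity, with one free
# parameter (a critical humidity) tuned against observations.

def cloud_fraction(rel_hum, rh_crit):
    """Empirical scheme: no cloud below rh_crit, full cover at saturation."""
    frac = (rel_hum - rh_crit) / (1.0 - rh_crit)
    return np.clip(frac, 0.0, 1.0)

rng = np.random.default_rng(0)
rh_obs = rng.uniform(0.4, 1.0, size=200)                   # "observed" humidities
cloud_obs = np.clip(cloud_fraction(rh_obs, 0.72)
                    + rng.normal(0, 0.05, 200), 0.0, 1.0)  # noisy "observed" cloud

# Tune the free parameter by minimising squared error against the observations.
candidates = np.linspace(0.5, 0.9, 401)
errors = [np.mean((cloud_fraction(rh_obs, c) - cloud_obs) ** 2) for c in candidates]
rh_crit_tuned = candidates[int(np.argmin(errors))]
print(f"tuned critical humidity: {rh_crit_tuned:.2f}")     # recovers ~0.72
```

The point, for what follows, is that a tuned parameter carries information from whatever observations were used to fit it.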

I think that section 3 "Simulation of past near-surface temperature changes" is the part that is best described as "comparison of long-term climate simulations with the Met Office model against observations".

They compare "annual mean (ie temporally averaged) global mean (ie spatially averaged) temperatures" from simulations against the instrumental record. They do this with the simulations incorporating just man-made effects on climate ("ANTHRO") and with both natural and man-made effects ("ALL"). Time runs from 1870 to 2004.

The output of the simulation runs follows the instrumental record well. They conclude that the inclusion of man-made effects is necessary to reproduce the instrumental record.
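
For what it's worth, a minimal sketch of what "annual mean global mean" amounts to in practice, using synthetic data (the only real subtlety is the area weighting by the cosine of latitude):

```python
import numpy as np

# "Annual mean, global mean": area-weighted spatial averaging followed by
# 12-month averaging. The temperature field here is synthetic.
rng = np.random.default_rng(1)
n_years, n_lat, n_lon = 135, 36, 72                 # 1870-2004, coarse grid
temps = 14.0 + rng.normal(0.0, 0.5, (n_years * 12, n_lat, n_lon))

lats = np.linspace(-87.5, 87.5, n_lat)
weights = np.cos(np.deg2rad(lats))                  # grid boxes shrink towards the poles

zonal_mean = temps.mean(axis=2)                                # (time, lat)
global_mean = np.average(zonal_mean, axis=1, weights=weights)  # (time,)
annual_global_mean = global_mean.reshape(n_years, 12).mean(axis=1)

# A comparison with an observed series would then just be something like:
# rmse = np.sqrt(np.mean((annual_global_mean - observed_annual) ** 2))
print(annual_global_mean.shape)                     # one value per year
```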


Was I convinced that this validated the HadGEM1 model?

No, I'm sorry, I was not convinced. Who can doubt that the observed climate and other observations were used in parameterising the components of the model? Who can doubt that, prior to being used for these "experiments" (as the authors refer to their runs of their model), its outputs had already been compared to instrumental records and the reasons for discrepancies pinned down and eliminated?
As I've said before, if the model could not reproduce the known climate history, irrespective of the correctness of its physical models, that would be a sign of outright incompetence in its programmers.

For some more information on Met Office climate modelling software, see: Engineering the Software for Understanding Climate Change

May 20, 2012 at 11:13 PM | Registered CommenterMartin A

May 20, 2012 at 10:50 PM | Green Sand

Why do they always show hindcasts and never previous forecasts?

Because that's how weather forecasting works at the MetO. If you look at their surface pressure chart or any aspect (weather, wind, temp etc) of their forecast, they are 'updated' on a regular basis. Ultimately, they are only forecasting 3 hours ahead, because up until that time they can change the forecast if necessary. The same applies to the pressure charts; I'm sure that by the time the 84-hour prognosis becomes the current chart it will have changed significantly. You can bet your boots they count their accuracy statistics from the last forecast issued, not the one made 84 hours in advance.

May 21, 2012 at 12:16 AM | Unregistered CommenterBilly Liar

May 21, 2012 at 12:16 AM Billy Liar

I am sure you are correct. If I really want to know about the weather in the UK I listen to the Shipping Forecast.

If you want to get a handle on just how confident the UK MO is ask them why they have the need to update the Shipping Forecast every 6 hours?

When “those in peril” feel happy with a 3 day punt then maybe just maybe I might start paying attention.

May 21, 2012 at 12:29 AM | Registered CommenterGreen Sand

I still would like to know whether the model output is presented unchanged for the short-term forecast, or do experienced eyes look it over and adjust it, using observations, radar and, well, experience.

May 21, 2012 at 8:31 AM | Unregistered CommenterRhoda

Martin,

I looked at the paper you referenced. I was struck by the following text:

"Overall code quality is hard to assess. During development,
problems that prevent the model running occur frequently,
but are quickly fixed. Most of these are model configuration
problems rather than defects in the code. Some
types of error are accepted as modelling approximations,
rather than defects. For example, errors that cause instabilities
in the numerical routines, or model drift (e.g. over a
long run, when conservation of physical properties such as
mass is lost) can be complicated to remove, and so might
be accommodated by making periodic corrections, rather
than by fixing the underlying routines."
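
To see what a "periodic correction" of that kind amounts to, here is a deliberately crude sketch (toy field, invented drift rate and correction interval): the integration slowly leaks "mass", and rather than fixing the numerics, the field is simply rescaled every so often so that the global total matches its initial value.

```python
import numpy as np

# Toy illustration of a "periodic correction": a field whose integration
# slowly loses total "mass" through numerical error, patched by rescaling
# every N steps rather than by fixing the underlying routine.
rng = np.random.default_rng(2)
field = rng.uniform(1.0, 2.0, size=1000)
target_total = field.sum()                     # what conservation should preserve

def leaky_step(f):
    """One time step of a scheme that spuriously loses ~0.01% of total mass."""
    return f * (1.0 - 1e-4)

for step in range(1, 5001):
    field = leaky_step(field)
    if step % 500 == 0:                        # the "periodic correction"
        field *= target_total / field.sum()    # rescale so the total is conserved

print(abs(field.sum() - target_total))         # ~0 only because of the patching
```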

It all starts to fall into place. On another thread R. Betts referenced a paper that described the use of low-pass filters that were designed to reproduce the variability that was being sought; now we find that they can "press the reset button" when the model becomes unstable.

I am more and more convinced that this is a completely worthless exercise.

May 21, 2012 at 9:54 AM | Unregistered CommenterRoger Longstaff

Your Grace,

I think that you should write another book. How about (as a working title) "The Climate Model Illusion" ?

May 21, 2012 at 10:08 AM | Unregistered CommenterRoger Longstaff

"and so might be accommodated by making periodic corrections, rather than by fixing the underlying routines."

I love it - a rare laugh-out-loud moment when reading about climate modelling.

As a long-term software developer, it would be so nice if when running a soak test I could simply restart the test when it crashed, and only count the stats during the times when the test wasn't crashing. In the real world, however ....

Exactly how do the modellers include these "periodic corrections" into the projections for the next 30 years ?

May 21, 2012 at 12:06 PM | Registered Commentersteve ta

OK, I too have now read the paper that Richard Betts pointed us to, in response to Simon's parenthetical note that:

(perhaps not surprising when, as Richard Betts confirmed in another thread, even when the Met Office has the opportunity to assess its 30+-year temperature anomaly predictions in, for example, forecasts made in 1985, it chooses not to do the tests. One can only speculate as to why that might be.)

OK, so there are lots of opportunities to misunderstand things in blog dialogues. But the 2006 paper by Stott et al. is not a post hoc test of how well predictions by climate models stand up to time for a thirty-year prediction horizon. It couldn't be: it is about a new model, one that did not exist in 1976, as it would have had to have done, were it to have made a prediction on the thirty-year timescale.

At one level, you can understand that people don't want to dig out predictions from the 70s and 80s and compare them to observations. The older models were pretty crude, without all the bells and whistles that newer ones have. The paper Richard referenced described some of the new features in the new model. But on the other hand, the predicted sensitivities of newer models are not all that different from those of older models, so the predictions would at least be interesting to look at... There's the famous case of Hansen's A - B - C predictions, which is well worth looking at, after all. So I did a bit of digging and tried to find papers from the 80s or even 90s from the Met Office showing predicted changes in global temperature. This was an interesting exercise: I could not find any such papers. Clearly, there was lots of work being done on modelling, as far back as the early 90s and probably before. There's a paper in Science that confirms Martin A's lemma whereby climate papers have lots of authors, and there's another one in the Journal of Climate that doesn't. Both studies seem to have gone straight for sensitivity modelling: they double CO2, then see what happens to T. I could not find any paper which provided a detailed prediction of what would happen to future temperatures assuming a certain emissions scenario. Again, that may not be so surprising: given the state of models at the time, it must have been hard and somewhat pointless to try to initialize them with 'current conditions', so going for the simpler task of modelling CO2 doubling was probably more meaningful at the time. It may also be that I haven't looked well enough.

But if there are indeed no such documented predictions, then this raises another interesting question: what on earth are the white lines in the plots shown at the Met Office website, e.g. this one? The website has no reference for these plots, but makes them sound like predictions (not post-dictions) that were carried out ahead of time. E.g. the Figure legend says "Previous predictions starting from June 1985, 1995 and 2005 are shown as white curves, with red shading representing their probable range, such that the observations are expected to lie within the shading 90% of the time." But I can't see any references on that webpage, and my trawl through the literature could find no predictions of this type. Looking again, the wording is so vague that I suspect that these are in fact postdictions, hindcasts carried out using the new models.

Can Richard elucidate what those plots actually show?

May 21, 2012 at 12:11 PM | Registered CommenterJeremy Harvey

May 21, 2012 at 12:11 PM Jeremy Harvey

"I suspect that these are in fact postdictions, hindcasts carried out using the new models."

They are, I have discussed this with Richard before and he pointed me to the "Verification" paragraph towards the bottom of the page:-

"Verification"

"Retrospective forecasts have been made from numerous dates in the past. Some of these are shown in the top global annual temperature forecast figure (white curves and red uncertainty regions from 1985, 1995 and 2005)."

He also agreed that the wording below the chart "Previous predictions starting from June 1985, 1995 and 2005 are shown as white curves" is misleading and he was going to have a word with those concerned.

He also said that he would look at the performance of previous actual forecasts but did say that it would take some time. When I get time I will trawl back and find a link to the discussions I had with Richard.

May 21, 2012 at 1:33 PM | Registered CommenterGreen Sand

Those plots are hindcasts. In other words decadal projections of globally-averaged surface temperature run with current models using the information that would have been available at the start time of the hindcast.

These are very useful simulations in the light of attempts to assess the possibilities for projecting climate over decadal periods, and especially with the desire for regional forecasts to inform industrial, agricultural and fisheries applications. These simulations allow one to assess the extent to which natural variability modulates the expected persistent changes resulting from a broadly constant external forcing (greenhouse gas forcing).

Note that one doesn't expect these forecasts necessarily to do a good job, although the forecasts with their uncertainties might be expected to encompass the range of variability seen during the real evolution of the climate over the period. Likewise the "forecasts" allow assessment of important factors like the contribution of the initial climate state to influence the progression over the subsequent decadal projection.

It's obvious that natural variation dominates decadal excursion of the climate system even under the influence of a persistent external forcing, and so far as we know the dominant contributions to decadal variability (e.g. ENSO and volcanoes) are stochastic and essentially unpredictable at short timescales (even if we expect that ENSO effects average out to near zero on multi-decadal timescales). In fact we are likely much more confident in the predictability of surface temperature on multi-decadal timescales (say 30-100 years given a particular emissions scenario), than over a decade.

So a very important exercise in my opinion. If we're ever going to assess and then improve our abilities to make decadal forecasts, it will be to a significant extent through the efforts of this sort of modelling combined with real-world measurement.

May 21, 2012 at 1:39 PM | Unregistered Commenterchris

Jeremy,

Here is a Met Office climate prediction that was published in 1998:

http://wattsupwiththat.files.wordpress.com/2012/04/cop4.pdf

May 21, 2012 at 1:49 PM | Unregistered CommenterRoger Longstaff

Green Sand: Thanks, and doh. I should have remembered that you had had that dialogue with Richard, and that he had accepted that the wording was misleading. I think one item in my post above remains pertinent, though: there are very few multi-decadal predictions that are worth verifying at this time, because not many predictions of this type were made more than ten years ago. All of this means that if you want to believe statements such as chris's:

"In fact we are likely much more confident in the predictability of surface temperature on multi-decadal timescales (say 30-100 years given a particular emissions scenario), than over a decade."

You will be better off reading it through rose-tinted spectacles...

May 21, 2012 at 2:07 PM | Registered CommenterJeremy Harvey

Jeremy,

Google: "Met Office - COPing to predictions" for a WUWT article on historical MO predictions. It contains predictions published by the MO dating back to 1998.

(I tried, and failed, to post the link)

May 21, 2012 at 2:10 PM | Unregistered CommenterRoger Longstaff

@ Jeremy Harvey

Previous relevant discussions with RB can be found here:-

http://www.bishop-hill.net/blog/2012/3/14/climate-hawkins.html#comment17348994

You will need to scroll back to pick up the thread.

There is also the "Questions for the UKMO 2" discussion thread:-

http://www.bishop-hill.net/discussion/post/1727950

May 21, 2012 at 2:11 PM | Registered CommenterGreen Sand

Thanks, Roger. Hard to read what the prediction was on the scale they have used - though to my eyes at least they seem to have predicted a stasis in global temperatures in the early 21st century - well done! I guess they started making specific predictions using all the various models round about the late 90s.

May 21, 2012 at 2:12 PM | Registered CommenterJeremy Harvey

Jeremy

"because not many predictions of this type were made more than ten years ago"

Not sure. The First Assessment Report (FAR) of the Intergovernmental Panel on Climate Change (IPCC) was completed in 1990, and that was based on model forecasts. The formation of the IPCC in 1988 was a result of model forecasts. All I want to know is how they have performed over time.

I have no issues with hindcasts, I understand the benefits they bring but they do not answer the simple question:- Is "our" skill improving with time, experience and resource? Only by comparing subsequent forecasts can "our" ability be assessed and I am quite sure that they are assessed!

May 21, 2012 at 2:31 PM | Registered CommenterGreen Sand

I agree Jeremy, it is hard to see what they actually predicted over the last 12 - 13 years, however, my eyes read it very differently from yours "they seem to have predicted a stasis in global temperatures in the early 21st century - well done!". All I see is a relentless rise in temperature (that did not happen), but there again, I do need glasses!

May 21, 2012 at 2:37 PM | Unregistered CommenterRoger Longstaff

May 21, 2012 at 2:07 PM | Jeremy Harvey

That's obvious though, isn't it Jeremy? If greenhouse gas forcing results in a persistent surface temperature contribution of the order of 0.15-0.2 °C per decade, and year-on-year variability from ENSO, solar (somewhat predictable) and volcanic activity can be as much as 0.1-0.2 °C, then we are totally unsurprised (in fact we expect it) that decadal surface temperature variability is dominated by the internal contributions. As these average towards zero (ENSO, volcanoes unless we have a persistent bout of eruptions, and solar, at least the solar cycle component), the contribution from the forcing continues to rise out of the noise on longer timescales.
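
If anyone wants to see that with numbers rather than words, here is a toy sketch (the 0.18 °C/decade trend and 0.15 °C noise level are just the round figures above, and the noise is crude white noise rather than anything ENSO-like, which if anything flatters the decadal trends):

```python
import numpy as np

# Toy series: a steady forced trend plus year-to-year internal variability.
rng = np.random.default_rng(3)
trend_per_year = 0.018            # ~0.18 C per decade of forced warming
noise_sd = 0.15                   # ENSO / volcano / solar sized wiggles

years = np.arange(300)
temp = trend_per_year * years + rng.normal(0.0, noise_sd, years.size)

def window_trends(series, window):
    """Least-squares trend (C per decade) over each non-overlapping window."""
    slopes = []
    for i in range(0, series.size - window + 1, window):
        x = np.arange(window)
        slopes.append(np.polyfit(x, series[i:i + window], 1)[0] * 10.0)
    return np.array(slopes)

# 10-year trends scatter widely (some are even negative); 30-year trends
# cluster tightly around the imposed 0.18 C/decade, i.e. the forced signal
# rises out of the noise on longer timescales.
print("std of 10-year trends:", round(float(window_trends(temp, 10).std()), 2), "C/decade")
print("std of 30-year trends:", round(float(window_trends(temp, 30).std()), 2), "C/decade")
```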

That's pretty much exactly what we've observed over the last 30-odd years...the temperature drifts upwards on a background of internal variability. Since the greenhouse forcing hasn't changed very much, that's also pretty much what we expect going forward.

I guess the million-dollar questions re decadal projections relate to the extent to which the apparently stochastic internal variability actually has some non-stochastic components that we can learn to predict. In other words, our decadal projections might always be highly probabilistic, with perhaps the major advance being to narrow the range of likelihoods... or we might be able to make more quantum-style advances in decadal predictability. Studies like the one we're discussing are the major means of finding out!

May 21, 2012 at 2:49 PM | Unregistered Commenterchris

@ chris

"Those plots are hindcasts. In other words decadal projections of globally-averaged surface temperature run with current models using the information that would have been available at the start time of the hindcast."

Do those "current models" incorporate any measured data (for example in the form of derived parameter values) from during the projected period? ie is the technique only of use for "predicting" what's already happened?

May 21, 2012 at 2:52 PM | Unregistered CommenterSimon Anthony

May 20, 2012 at 12:16 PM | Simon Anthony

I'm pleased to see that some other commenters have spotted your misunderstanding.

The reason that "30-year forecasts made in 1985" have not been evaluated (either publicly or in secret) is because such forecasts never existed!

In 1985, nobody did simulations of the next 30 years with climate models. It simply wasn't possible. It was enough of an achievement to get a numerical weather prediction to run over a whole year, let alone several decades. In those days, climate change simulations were done by running one year at (say) doubled CO2 and comparing with the present day. The excellent paper I like to cite by Sawyer, which gave a forecast of 0.6C warming by 2000 compared to the early 1970s, was based on interpolating such a doubled-CO2 study (by Manabe and Wetherald) according to the estimated CO2 rise by 2000. (As it turned out, the actual warming was about 0.5C over that period.)

"Transient" climate simulations (ie: with a gradual year-by-year change in the external forcing) did not begin until the mid-1990s, but even then they were not suitable for comparing the first few years of the projection with observations because the specific year-to-year variability was not forecastable for named years - but that didn't matter for the long-term because it was the overall trend that was of interest.

It wasn't until 2005 that the first "initialised forecast" simulations were done, which is when observational data are used to initialise the forecasts at a very specific year - ie: the model gets kicked-off at the right place in (for example) the ENSO cycle. This is what we use for decadal forecasting.

The figures that you've seen with simulations starting at 1985 are (as Green Sand says) hindcasts, not forecasts, ie: they were done after the fact, using only observational information that would have been available in 1985, and without prior knowledge of the Mt Pinatubo eruption. This was to test whether the new decadal forecasting techniques actually worked. The only parts of those figures that are actual forecasts are those from 2005 onwards.

So, there is no "hiding" of poor long-term forecasts from the 80's. It is simply that sich forecasts could not be done at that time!

May 21, 2012 at 2:52 PM | Registered CommenterRichard Betts

Richard, Thank you for coming back.

I would be grateful if you could let me know if you think that it is reasonable practice to use models for multi-decadal forecasts that:

A. Use low pass filters "chosen to preserve only the decadal and longer components of the variability", and,

B. Accommodate errors that cause instabilities "by making periodic corrections, rather than by fixing the underlying routines"?

I would very much like your specific answers to these points.

Thanks, Roger

May 21, 2012 at 3:11 PM | Unregistered CommenterRoger Longstaff

"Those plots are hindcasts. In other words decadal projections of globally-averaged surface temperature run with current models using the information that would have been available at the start time of the hindcast."

At last - an explanation for the historical "adjustments" that seem to upset so many sceptics.

Clearly, the hindcasts from the new models are showing that for now to be as per observations, 1985 must have been different, so we have to adjust the data that was "available at the start time of the hindcast".

May 21, 2012 at 3:18 PM | Registered Commentersteve ta

May 21, 2012 at 2:52 PM | Simon Anthony

Do those "current models" incorporate any measured data (for example in the form of derived parameter values) from during the projected period? ie is the technique only of use for "predicting" what's already happened?

No!

At least, there is no tuning to get the models to forecast the change correctly. Data are used for getting the long-term average right, but tuning the climate sensitivity against the data that are then used for testing and attribution would clearly be a circular argument.

May 21, 2012 at 3:22 PM | Registered CommenterRichard Betts

Green Sand, thanks for the pointer to the Climate Hawkins thread - in fact, I had a comment on there, just above the one your link points to. I really should have remembered the discussion about prediction vs. hindcast concerning that Met Off webpage.

The comment I made at the time argued in favour of using more than just global temperature to test model predictions. As Simon Anthony suggested, you can get that right, if you are not too demanding in statistical measures of forecast success, simply by saying that the next ten years will be much like the present. Figure 3 at the bottom of the Met Office's page on decadal forecasts is an example of a more demanding test: comparing regional temperatures, model vs. observed, over a five-year period. The Figure legend says:

"The stippling shows where the observations lie outside of the 5-95% confidence interval of the forecast ensemble. This is expected in 10% of cases, but actually occurred in 31%, showing that the ensemble did not capture the true range of uncertainties."

That wording seems odd to me too. It suggests that the only reason the model did not get things right was that the ensemble of model runs underestimated the error bars. If you look, you can see that the predictions 'ran hot' in both the Southern Ocean and the Arctic. I'm sure you could come up with a verification statistic to characterize this Figure that would be harsher than 31% vs 10% error.
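
Incidentally, the 31%-vs-10% statistic itself is simple to compute; a minimal sketch with made-up numbers (an ensemble of regional five-year-mean anomalies against "observations"), and the same arrays give you the mean bias, which is exactly what a single percentage hides:

```python
import numpy as np

# Fraction of grid points at which the observation falls outside the
# ensemble's 5-95% range. All numbers are synthetic; a well-calibrated
# ensemble should give roughly 10%.
rng = np.random.default_rng(5)
n_members, n_points = 9, 500
ensemble = rng.normal(0.30, 0.10, (n_members, n_points))   # forecast anomalies
obs = rng.normal(0.25, 0.12, n_points)                      # "observed" anomalies

lo = np.percentile(ensemble, 5, axis=0)
hi = np.percentile(ensemble, 95, axis=0)
outside = float(np.mean((obs < lo) | (obs > hi)))
bias = float(np.mean(ensemble.mean(axis=0) - obs))          # the sign the % hides

print(f"outside 5-95% range: {100 * outside:.0f}%   mean bias: {bias:+.2f} C")
```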

May 21, 2012 at 3:26 PM | Registered CommenterJeremy Harvey

May 21, 2012 at 8:31 AM | Rhoda

I still would like to know whether the model output is presented unchanged for the short-term forecast, or do experienced eyes look it over and adjust it, using observations, radar and, well, experience.

Hi Rhoda

Yes, there is human intervention on the short-term forecast.

Cheers

Richard

May 21, 2012 at 3:27 PM | Registered CommenterRichard Betts

May 21, 2012 at 2:52 PM | Simon Anthony

I'm not an expert in these particular models, Simon, or even in climate models generally, but the parameterization of models incorporates independently determined physics. So it's not really very easy to understand what exactly your first question means. I think it's somewhat ill-posed.

Obviously during the last 20-30 years independent characterization of greenhouse forcing; cloud effects and their parameterizations; solar forcings; the physics of atmospheric aerosols etc. have improved and the best parameterizations currently available will have been used in the modelling.

The answer to your second question is No... the technique is useful for predicting (1) what already happened (2) what might have happened under different circumstances (3) what might happen in the future according to different scenarios ...etc.

...that's the essential value of models. It's not obvious that they tell us very much that we didn't already know from basic physics and empirical observations. However they allow us to test this knowledge in application to real world phenomena, to explore a whole range of different situations (e.g. different emission scenarios etc.), and to focus our attention on apparent anomalies between observations and model predictions, to explore the possibilities for regional projections etc...

Incidentally, one thing I don't know is how the anthropogenic contributions are varied through the decadal sims. In other words, does the greenhouse gas concentration rise during the decadal sims according to the existing rate of increase at the start time of the sim? I expect this could be found out with a little effort, though I expect it doesn't make much difference...

May 21, 2012 at 3:31 PM | Unregistered Commenterchris

Hi Jeremy

Yes indeed, the models are tested for regional skill and teleconnections and not just the global mean.

This paper may be of interest.

Cheers

Richard

May 21, 2012 at 3:33 PM | Registered CommenterRichard Betts

chris, I guess you are right in one respect: A prediction made 30 years ago that there would be a steady upward drift by 0.15 - 0.2 degrees per decade, obscured by some noise function with amplitude larger than 0.2 degrees, would not have been a very bad prediction. In fact, that prediction was made - in 1972, by Sawyer, Nature 239, 23 - 26 (01 September 1972), as mentioned by Richard above.

But if that rise by 0.6 degrees or so was due, to a greater degree than you allow, to chance, then the wider claim that you make, that multi-decadal predictions made now must be quite reliable, much more so than decadal predictions, becomes more questionable. We don't really have that many tests of the multi-decade skill of models that enable us to be sure that Sawyer was not just lucky. Going back to Simon's original post: Myles Allen was also expressing some doubt about how confident we should be in multi-decadal predictions.

May 21, 2012 at 3:42 PM | Registered CommenterJeremy Harvey

May 21, 2012 at 2:52 PM | Richard Betts

Very good and thank you for your patience in bearing with me. Putting the various pieces together I think I understand but please correct the following statements if necessary:

1: The only actual forecasts the Met Office has made for future temperatures were made in 2005.

2: The AMS paper by Stott et al, published in 2006, to which you referred me therefore didn't assess the quality of forecasts of future temperatures.

3: The 2005 forecasts were to test the decadal predictions.

4: There have been no published predictions for periods longer than 10 years.

5: In 2011 the central prediction for temperature anomaly was ~0.6 degrees.

6: In 2011 the measured temperature anomaly was ~0.35 degrees.

7: The discrepancy between the measured anomaly and the central prediction after 6 years was ~70%.

8: To counter the discrepancy, the model was reset after 6 years and new predictions made.

9: The new predictions have a central value of temperature anomaly of ~0.8 degrees by 2020.

10: The current temperature anomaly is ~0.3 degrees, lower than it was in 2011.

11: The Met Office central prediction for the current temperature anomaly is ~0.4 degrees.

12: After less than a year of the prediction period the central temperature prediction anomaly is therefore higher than the measured value by about 30%.

Assuming I've got all that right an obvious question follows.

Since the MO's predictions overestimated the temperature anomaly by 70% after 6 years, and after subsequent resetting (presumably with an improved model) by 30% after a further year, and given that apparently no assessment has been done of longer-term predictions, how much confidence do you have in the MO's longer-term predictions?

May 21, 2012 at 3:46 PM | Unregistered CommenterSimon Anthony

May 21, 2012 at 3:22 PM | Richard Betts

You say that current models don't incorporate any measured data (for example in the form of derived parameter values) from during the projected period but then go on to add "Data are used for getting the long-term average right".

So the obvious question (I'm sure you saw it coming): does getting "the long term average right" involve using data from the projected period?

May 21, 2012 at 3:51 PM | Unregistered CommenterSimon Anthony

May 21, 2012 at 12:11 PM Jeremy Harvey

Can Richard elucidate what those plots actually show?

The Met Office website gives an email address for questions.


Contact us
You can access our Customer Centre, any time of the day or night, by phone, fax or email.
Trained staff will help you find the information you need.

By email enquiries [ "at" sign] metoffice.gov.uk

Maybe their trained staff would relish having an interesting enquiry such as this to respond to.

May 21, 2012 at 3:52 PM | Registered CommenterMartin A

Simon - I think your point 1 is incorrect - the MO have issued predictions since at least 1998 (see my post @ 1.49 pm).

Richard - If you answer my questions @ 3.11 pm I promise never to pester you again!

May 21, 2012 at 3:53 PM | Unregistered CommenterRoger Longstaff

Martin A @ 3:52pm, it turns out that I'd missed the fact that this question had been answered before, e.g. by Richard Betts himself. The plots show hindcasts - and I should have remembered that this was known when I asked my question. The rest of the thread above shows me being gently corrected by Green Sand among others!

May 21, 2012 at 4:02 PM | Registered CommenterJeremy Harvey

Martin,

Good idea! I emailed my 3.11pm question to the MO, and got the following reply:

"At the moment we are experiencing high volumes of emails, so it may takeus a little longer than usual to reply...."

May 21, 2012 at 4:10 PM | Unregistered CommenterRoger Longstaff

May 21, 2012 at 3:31 PM | chris

I'm not quite sure why you thought my first question ill-posed but fortunately Richard Betts felt able to answer a version of it. He said that models don't use parameter values derived in any way from measured data in the projected period which, unless I've misunderstood what you've written, you disagree with.

As to my second question: "is the technique only of use for "predicting" what's already happened?", that probably was badly posed. What I should have asked was whether the technique was any use for predicting the future, to which you say it's useful for predicting what might happen in the future according to different scenarios.

But if what you say is correct, and the models have been adjusted during the past 20-30 years due to measured data from that period, then any successes they have in "prediction" (perhaps "description" would be better) would have been rather circular (as Richard points out). What reason would you then have for believing that they are "useful for predicting what might happen in the future according to different scenarios"? Or rather (since inspecting goat entrails might be used to predict the future, just not very well) why should you have confidence in such predictions?

May 21, 2012 at 4:12 PM | Unregistered CommenterSimon Anthony

May 21, 2012 at 3:53 PM | Roger Longstaff

It's getting hard to keep up with all posts. I've now seen your 1:49PM post. So it seems that earlier predictions exist although, having glanced at the documents linked to by Anthony Watts, not in a very useful form. I wonder whether the MO can provide the underlying data used in those charts. If so, at least we'd have a ~14-year prediction to see how well they've done.

May 21, 2012 at 4:19 PM | Unregistered CommenterSimon Anthony

Hi Simon

A few corrections/nuances below. Much of it hinges on what is meant by forecast / prediction /projection, which is all a bit of a faff to be honest, so I will try to be clear.


1: The only actual forecasts the Met Office has made for future temperatures were made in 2005.

Sort of - the key word here being "forecast". The only initialised forecasts were made since 2005. By initialised forecasts I mean ones that attempt to capture the variability of the first few years, not just the long-term (multi-decadal) trend.

Multi-decadal projections of the long-term trend have been done since the mid-1990s.

"Step-change" studies of doubled CO2 have been done since the 1960s.


2: The AMS paper by Stott et al, published in 2006, to which you referred me therefore didn't assess the quality of forecasts of future temperatures.

Correct, it was looking at how simulations of long-term trends compared with observations, and included the period you were interested in (1985 onwards). These simulations were done in the early 2000s. They were not done prior to the observations.


3: The 2005 forecasts were to test the decadal predictions.

Correct :-) NB. Here you/we are using 'prediction' and 'forecast' interchangeably.


4: There have been no published predictions for periods longer than 10 years.

We try not to use the term 'predictions' for longer periods, as it becomes contingent on emissions scenarios, so by calling (say) a 50-year simulation a "prediction" we would be implying a prediction of human choices on global emissions.

We have done projections longer than 10 years, for a number of different emissions scenarios (and trying not to be judgemental about which emissions scenario will be followed in reality).


5: In 2011 the central prediction for temperature anomaly was ~0.6 degrees.

No, sorry, I think I have confused you here. The 0.6C warming was an estimate of warming by the year 2000 that was published by the Met Office in 1972. In those days they didn't get bogged down in the finer points of forecast / prediction / projection, but whatever you call it, it was an estimate of a change for the future that can be compared with what actually occurred (which was a 0.5 C warming from early 1970s-2000)

Or have I misunderstood? Are you talking about a different 0.6 here?


6: In 2011 the measured temperature anomaly was ~0.35 degrees.

Correct but see my response to (5) regarding the relevance of this.


7: The discrepancy between the measured anomaly and the central prediction after 6 years was ~70%.

Please see my point about confusion in point (5) when comparing forecast and observed changes.

But that aside, I should point out that it doesn't make sense to compare anomalies in terms of percentages, because the baseline is completely arbitrary. If we used a baseline of 1971-2000 (ie: warmer than 1961-1990), the percentage difference between the forecast anomaly and the observed anomaly would change even though the actual difference between the two changes would be the same.
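
To spell that out with the figures being discussed upthread (the 0.60/0.35 anomalies and the ~0.12 C offset between the two baseline periods are just the numbers already mentioned, used illustratively):

```python
# The same forecast and observation expressed against two different baselines.
# The absolute difference is unchanged; the "percentage error" is not, which
# is why it isn't a meaningful measure for anomalies.
forecast, observed = 0.60, 0.35        # anomalies relative to 1961-1990
offset = 0.12                          # approx warmth of 1971-2000 vs 1961-1990

for baseline, shift in [("1961-1990", 0.0), ("1971-2000", offset)]:
    f, o = forecast - shift, observed - shift
    print(baseline,
          "abs diff:", round(f - o, 2),
          "as % of observed:", round(100 * (f - o) / o))
# abs diff is 0.25 in both cases; the percentage jumps from ~71 to ~109.
```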



8: To counter the discrepancy, the model was reset after 6 years and new predictions made.

Given the above confusion I don't think there is a discrepancy. However, a new decadal forecast was done starting in 2009, using observed starting conditions for that year (since clearly those observations were not available in 2005).


9: The new predictions have a central value of temperature anomaly of ~0.8 degrees by 2020.

Correct :-)


10: The current temperature anomaly is ~0.3 degrees, lower than it was in 2011.

We don't know the global 2012 anomaly yet - it is only May, so we don't even have half the year to go on!


11: The Met Office central prediction for the current temperature anomaly is ~0.4 degrees.

2012 is expected to be around 0.48 °C warmer than the long-term (1961-1990) global average of 14.0 °C, with a predicted likely range of between 0.34 °C and 0.62 °C.


12: After less than a year of the prediction period the central temperature prediction anomaly is therefore higher than the measured value by about 30%.

Well, firstly it's too early to say how 2012 will pan out, and secondly, as I said under point (7) it doesn't make sense to compare anomalies in terms of percentages.

how much confidence do you have in the MO's longer term predictions?

Fairly high confidence, given that the uncertainty ranges are large!

I think the projections (including their published uncertainty ranges) represent what current understanding of the climate system implies for the consequences of ongoing emissions of greenhouse gases and aerosols, modulated by changes in the physical properties of the land surface.

Many of your points above are irrelevant to longer-term predictions / projections, as they concern much shorter-timescale processes which are dominated by natural variability as opposed to ongoing GHG increases.

The fact that estimates of warming made in the 1970s were reasonably close to what actually happened is fairly compelling.

Also, the fact that when we use current models to simulate the climate change of recent decades, they reproduce past changes reasonably well gives me confidence. NB in such studies the models are not tuned to get the past rate of warming correct!

The uncertainties in future warming are still large though, as we have not had long enough or large enough climate change in the past to constrain the models very well. Several models which all agree reasonably well with past observations give different rates of warming in the future (but all do give further warming).

Cheers

Richard

May 21, 2012 at 4:40 PM | Registered CommenterRichard Betts

May 21, 2012 at 4:12 PM | Simon Anthony

Yes, you've misunderstood, Simon... both Richard and I are in agreement that the models aren't tuned to get the right answer, which is what Richard said (and how he interpreted your question). I interpreted it in a similar way and stated that, in my understanding, the parameterization of the model is based on independently derived physics. Of course some of the physics is likely to have improved during the last 30 years (!), and no doubt the best parameterization available would be used in the modelling.

The difference between goat entrails and science is pretty obvious, I would have thought! All science is based on the expectation that the natural world makes sense according to physical laws. Since enhanced greenhouse gas levels are resulting in a radiative imbalance at the top of the atmosphere, the world will continue to warm into the near future unless some very unpredictable contingent phenomena accrue (weird solar events; massive volcanic activity). As with all of the periods of stasis or falling temperature in the surface record of the last ~40 years, the greenhouse signal continues to rise from the natural variability that manifests strongly on decadal timescales.

One would have to present some pretty weird explanations to counter the expectations from science...did you have something in mind!? :-)

May 21, 2012 at 4:44 PM | Unregistered Commenterchris

chris,

"....the greenhouse signal continues to rise from the natural variability that manifests strongly on decadal timescales."

An alternative (and to me much more likely) explanation is that we are mid-way between a LIA and a MWP climate, and that temperatures are exactly where we would expect them to be.

As Richard can not, or will not, answer my question @ 3.11pm, would you like to have a go?

May 21, 2012 at 5:28 PM | Unregistered CommenterRoger Longstaff

May 21, 2012 at 4:40 PM | Richard Betts

Very interesting; thanks for taking the time to make such comprehensive answers. I'll also number my points arising as that made it easier to follow:

1 You say "Multi-decadal projections of the long-term trend have been done since the mid-1990s."

""Step-change" studies of doubled CO2 have been done since the 1960s."

"estimates of warming made in the 1970s were reasonably close to what actually happened"

Are these estimates collected together somewhere so that one can see a comprehensive history of modelled attempts to predict/forecast (whichever you'd rather) future temperature anomalies? Ideally incorporating models other than the MO's. I expect the answer is no, otherwise you'd already have referred me to it, but even a partial summary would be useful. In particular, what data of this kind are available from the MO, and in what form?

2: With reference to those estimate from the 70s, you also say "We try not to use the term 'predictions' for longer periods, as it becomes contingent on emissions scenarios so by calling (say) a 50-year simulation a "prediction" this is implying a preciction of human choices on global emissions." So those 70s predictions obviously incorporated emissions scenarios and in order to have confidence in the estimates (ie they aren't close just by chance) the emissions scenarios would have to match what's actually happened. Is that the case?

3: On the ~0.6 degree warming predicted for 2011, I was using a chart from the MO's website...http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc. That seems to show a 2005 prediction of a temp anomaly of ~0.6 degrees in 2011. So, assuming I've understood that correctly, the predicted temp is 0.25 degrees higher than the measured temp.

Now you're quite right to say that this is with respect to an arbitrary baseline, but of course I could recast the prediction and measurement in terms of rate of change of temp, and then the discrepancies would be rather more than I suggested.

4: That brings up another question. Why not just predict global temp in degrees K rather than the anomaly, which could then be derived from the temp?

5: I think your joke that your confidence in the MO's predictions is fairly high because of the large uncertainty ranges is splendid. The greater the uncertainty => the greater the confidence in the prediction. Of course it also follows that the greater the confidence, the less use is the prediction. Compared to the everyday use of "confidence", this seems an abuse of the term, particularly if "policy-makers" take the description of the MO's high confidence in its predictions at face-value.

6: You say "we use current models to simulate the climate change of recent decades, the models reproduce past changes reasonably well, also gives me confidence. NB in such studies the models are not not tuned to get the past rate of warming correct!"

I find this confusing. Presumably current models use information from previous years' measurements. While they may not be specifically tuned to get the rate of warming correct, they may be tuned in indirect ways which have just the same result. I'm very uncomfortable with any test of predictions which uses any measured information whatsoever from the period to be predicted. The only "clean" test is surely to use no knowledge whatsoever from the predicted period (a toy sketch of what I mean is set out after these numbered points). Hence my interest in the apparent discrepancies in the anomalies (or rates of change) of temp when such "clean" predictions have been made.

7: You also say:

"The uncertainties in future warming are still large though, as we have not had long enough or large enough climate change in the past to constrain the models very well. Several models which all agree reasonably well with past observations give different rates of warming in the future (but all do give further warming)."

I appreciate your honesty but when "Several models which all agree reasonably well with past observations give different rates of warming in the future" and those models can differ in quite radical ways, I'd be quite concerned. Presumably, if they were tuned so that their future predictions matched, their post-dictions of the past would disagree with one another and with measured data. I really don't understand why such odd behaviour would inspire much confidence in any of the models.
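
Coming back to point 6, here is a toy sketch of what I mean by a "clean" out-of-sample test (the data, the one-parameter "model" and the cut-off year are all invented; the point is only the bookkeeping): the free parameter is fitted using data up to the cut-off only, and the fitted model is then scored on the later years, which it has never seen.

```python
import numpy as np

# Toy out-of-sample ("clean") test: fit a one-parameter "model" (a sensitivity
# to a known forcing series) using data up to 1990 only, then score it on
# 1991-2010, which it has never seen.
rng = np.random.default_rng(6)
years = np.arange(1950, 2011)
forcing = 0.01 * (years - 1950)                  # hypothetical forcing index
obs = 0.8 * forcing + rng.normal(0.0, 0.08, years.size)

train = years <= 1990
test = ~train

# Fit the single free parameter on the training period only (least squares
# through the origin).
sensitivity = np.sum(forcing[train] * obs[train]) / np.sum(forcing[train] ** 2)

prediction = sensitivity * forcing[test]
rmse = float(np.sqrt(np.mean((prediction - obs[test]) ** 2)))
print(f"fitted sensitivity: {sensitivity:.2f}, out-of-sample RMSE: {rmse:.3f} C")
# Fitting on the whole record and then "verifying" against that same record
# would, by construction, look better than this honest number.
```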

Anyway, I'm enjoying this discussion and learning a lot so I hope we can continue.

May 21, 2012 at 5:39 PM | Unregistered CommenterSimon Anthony

I'm sure modelling is fun and educational. I'm sure the Met is doing good work on modelling. They may even be improving their models and methods so that one day, if I buy them a big enough computer, they will be able to do good forecasts over any reasonable term.

But what makes those models a suitable basis on which to make massively important decisions over the future of civilization? Well, I'm sure Richard Betts would say, nothing makes them that, they are just models. But in fact they are being used for just that purpose, and scientists cannot stand back and wash their hands of it.

It is academic otherwise whether a chaotic system can be successfully modelled in this way. I'd say no, not in detail, but just possibly in broad brush. But those who claim it is possible, or that it is in fact being done, really ought to be asked to prove it a little more rigorously than they seem to be right now.

May 21, 2012 at 7:05 PM | Unregistered CommenterRhoda

May 21, 2012 at 4:44 PM | chris

You say

"in my understanding the parameterization of the model is based on independently derived physics. Of course some of the physics is likely to have improved during the last 30 years (!) and no doubt the best parameterization available would be used in the modelling."

Now if it's entirely certain that the improved parameterization was done completely independently of any of the measured data to which climate models were subsequently compared, I'd agree that this was a sound technique. Are you certain that that's the case? Because if it's not, the required answers have in some likely unquantified way, influenced the models' predictions.

"The difference between goat entrails and science is pretty obvious I would have thought! "

Yes, it was a joke.

"the greenhouse signal continues to rise from the natural variability that manifests strongly on decadal timescales."

This rather begs the question: this debate isn't about the existence of the "greenhouse signal" but (at least as far as I'm concerned), about its magnitude and how reliably it can be predicted by models.

May 21, 2012 at 7:20 PM | Unregistered CommenterSimon Anthony

Hi Richard, many thanks for your contributions

I am beginning to understand your comments re:-

"It wasn't until 2005 that the first "initialised forecast" simulations were done, which is when observational data are used to initialise the forecasts at a very specific year - ie: the model gets kicked-off at the right place in (for example) the ENSO cycle. This is what we use for decadal forecasting."

And it explains why we were at cross purposes re forecasts only starting in 2005. However, we still need to know how the forecasts made prior to 2005 performed, or we will not be able to assess whether the "initialised forecast" simulations are a great success.

As I understand it the UKMO GCM was one of the forecasting systems that produced numbers deemed to be so significant that they resulted in the formation of the IPCC in 1988 and the production of FAR in 1990?

So I do not understand how it can be said that forecasts from the 1980s do not exist. Yes, they may not be "initialised forecast" simulations, but forecasts were made by the MO and would have been (and this is just my speculation) updated on at least an annual basis to fit with ongoing developments/conferences.

All I am trying to get at is a metric by which to measure "our" performance against a timeline. Surely this exists?

May 21, 2012 at 7:43 PM | Registered CommenterGreen Sand

Hi Simon

Thanks for your questions and comments. Yes, a useful discussion!

Some very quick answers:

1. For a history, try AR4 WG1 chapter 1; for data, we make it available via the IPCC Data Distribution Centre, amongst other places.

2. Yes Sawyer (1972) made an assumption about future CO2 rise - I think "emissions scenario" would be affording it too much sophistication, it was essentially based on the rate of rise at the time, but read his paper for more details.

3. The decadal forecast figure you show gives a central estimate of the anomaly for 2011 of about 0.55C (white line), and the observed value (black lines) was just about within the 10-90% confidence interval (red plume), so it is consistent with natural internal variability.

4. Because the actual temperature is not particularly interesting or meaningful, it's the difference relative to what's been experienced in the past that is of interest.

5. Yes, tricky isn't it? As I've said before here, in my view it is all about how we respond to the risk in the face of uncertainty. We are confident that perturbing the energy balance of the planet in the way that we are will warm it, but we are uncertain about how much or what the impacts will be. So it's all down to how much risk we are prepared to accept.

6. Again, difficult isn't it? With a limited time period for which data are available, there's no real alternative to doing what we do at the moment (although suggestions welcome!) :-)

7. Again, it's the best that can be done, and as I said, my confidence is not in any particular model or projection, but in the fact that we have expressed the range of possible future changes as well as is currently possible.

Cheers

Richard

May 21, 2012 at 7:54 PM | Registered CommenterRichard Betts

Hi Green Sand

In the 1980s the standard practice was still to do doubled-CO2 experiments, as started in the 60s by Manabe and Wetherald. These were used as the basis for interpolating to get estimated rates of warming so it is possible to roughly do what you suggest.

Sawyer (1972) used such a method to estimate a rate of warming of 0.2C per decade for the next 30 years. The actual rate of warming in the HadCRUT4 dataset was about 0.17C per decade over roughly that period (so Sawyer's estimate was not too bad IMHO!)
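
(If anyone wants to repeat that kind of arithmetic themselves, something along these lines would do it - the anomaly series below is just a stand-in to show the calculation; the real comparison of course uses HadCRUT4:)

```python
# Fit a linear trend to an annual anomaly series and compare the fitted rate
# (degC per decade) with a predicted rate. The anomaly series is a stand-in.
import numpy as np

years = np.arange(1972, 2002)                      # roughly Sawyer's 30-year horizon
anomalies = -0.1 + 0.017 * (years - years[0])      # stand-in annual anomalies (C)

slope_per_year = np.polyfit(years, anomalies, 1)[0]
observed_rate = 10 * slope_per_year                # degC per decade

predicted_rate = 0.2                               # Sawyer (1972) estimate
print(f"fitted trend ~{observed_rate:.2f} C/decade vs predicted {predicted_rate:.1f} C/decade")
```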

IPCC First Assessment Report (1990) used the same kind of technique to estimate warming of 0.2C to 0.5C per decade, which was overdoing it compared to the observed warming. In the Second Assessment Report (1995), IPCC had started using "transient" simulations - a more realistic approach, which resulted in lower rates of warming (too low in comparison with the observed warming in the 1990s, but that was just one decade). The Third Assessment Report used similar techniques. Then later we get to the advent of initialised forecasting in 2005.

This figure from AR4 may be of interest for the projected / simulated rates of warming for the IPCC FAR, SAR and TAR compared to observations over 1990-2005, and you can use that link to read the IPCC chapter that I tried to link to in my response to Simon's 1st point above, but the link got corrupted.

Cheers

Richard

May 21, 2012 at 11:20 PM | Registered CommenterRichard Betts

May 21, 2012 at 7:54 PM | Richard Betts

Thanks for your latest responses. As you said, they were quick, so perhaps that accounts for some oddities.

1 & 2: I'll look at the papers you mention and see what I can make of them.

3: "The decadal forecast figure you show gives a central estimate of the anomaly for 2011 being about 0.55C (white line), and the observed value (black lines) was just about within the 10-90% confidence interval (red plume) so consistent with natural internal variability."

I thought you were joking when you said your confidence in the MO's predictions is fairly high because of the large uncertainty ranges. But here it seems, rather than accepting that the prediction was poor, you use the large uncertainty to defend it.

What you say may be correct, but had the forecast been worse (in the everyday sense of being more uncertain), its confidence bands would have been wider, the observed value might then have fallen within, say, the 50% confidence band, and the prediction would have counted as "better" than one that only just scraped into the 10-90% band.

So the weaker and less precise the models' predictions, the more chance they have of being "right". Do you really want to defend them on such grounds?
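
To make the point concrete, here's a toy illustration (all numbers invented) of how the same observation looks like a "better" forecast simply because the uncertainty band is wider:

```python
# The same observation is judged against two forecasts with the same central
# estimate but different spreads; the vaguer forecast "contains" it comfortably.
# Numbers are invented for illustration only.
from scipy.stats import norm

observation = 0.40       # observed anomaly (invented)
central = 0.55           # forecast central estimate (invented)

for spread in (0.08, 0.20):                        # a sharp forecast vs a vague one
    z10, z90 = norm.ppf(0.10), norm.ppf(0.90)
    lower, upper = central + z10 * spread, central + z90 * spread
    inside = lower <= observation <= upper
    print(f"spread {spread}: 10-90% band [{lower:.2f}, {upper:.2f}] "
          f"contains observation? {inside}")
```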

4: "Because the actual temperature is not particularly interesting or meaningful, it's the difference relative to what's been experienced in the past that is of interest."

Again, I'm not sure whether you're joking here: "The actual temperature is not particularly interesting or meaningful"? Getting the "actual temperature" right seems to me the most basic test of whether a model correctly predicts temperatures: if the model can't get the actual temperature right to reasonable accuracy (a zeroth order prediction), why should one have any confidence in higher order calculations of temperature changes?

Your answer seems so defensive that I wonder whether the "actual temperature" calculated by models (and again, I would find it very hard to believe that the calculation hadn't been attempted - if it hasn't been, then it seems a wilful avoidance of a basic test) is significantly different (by which I mean several degrees) from the observed value. So, even if you think the answer uninteresting or meaningless, what do models predict the global temperature to be?

5, 6 & 7: "my confidence is not in any particular model or projection, but in the fact that we have expressed the range of possible future changes as well as is currently possible."

I don't mean to be critical but this seems disingenuous. I don't think I or anyone else (at least on this thread) has suggested that the modellers aren't doing the best they can. Our concern is whether that best is anywhere near good enough to argue for the consequent changes in society advocated by some. From what (I think) we've so far established, the predictive capacity of the models (in the sense of a genuine prediction, made on a particular day, using only data from before that date and forecasting temperatures after that date) hasn't been properly assessed.

When I started looking at the models I expected to see them validated by showing, say, the RMS error of prediction vs observation as a function of time. I was surprised to find that modellers seem to think it's OK to use intra- or inter-model variation as a substitute. It isn't: all such a comparison can demonstrate is how well one model models another. And even then their predictions diverge widely.
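
The sort of validation I had in mind looks roughly like this (the arrays below are made up; the real exercise would use archived forecasts and the matching observations):

```python
# RMS error between forecast and observation as a function of lead time,
# averaged over a set of archived forecasts. The arrays are random stand-ins.
import numpy as np

n_forecasts, n_leads = 20, 10
rng = np.random.default_rng(0)
forecasts = rng.normal(0.0, 0.1, (n_forecasts, n_leads)).cumsum(axis=1)     # stand-in
observations = rng.normal(0.0, 0.1, (n_forecasts, n_leads)).cumsum(axis=1)  # stand-in

rms_by_lead = np.sqrt(np.mean((forecasts - observations) ** 2, axis=0))
for lead, rms in enumerate(rms_by_lead, start=1):
    print(f"lead year {lead}: RMS error {rms:.2f} C")
```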

May 21, 2012 at 11:52 PM | Unregistered CommenterSimon Anthony

To me, it is axiomatic that it is impossible to model a non-linear, chaotic, multivariate system that is defined by a large number of variables and in which the relationships between some of those variables are not perfectly understood. No useful information can be generated by such a model. I would be grateful if anybody could identify the error in what seems to me to be this expression of pure logic.

I believe that there are at least two fatal flaws in the methodology of the construction and operation of numerical climate models:

A. The Use of Filtering

Filters are a well understood technique for improving the signal-to-noise ratio (S/N) of a detector (eg in narrowband passive sonar detection). I have seen the use of low-pass and Kalman filters referred to in papers on climate modelling. However, while such filters increase the S/N of a detection system, they automatically exclude whole domains of the theoretical solution space - in other words, you can ONLY see what you expect to see. As we are told that climate model experiments are set up to detect the effects of GHG concentrations, that is all they could ever hope to detect, and all other consequences or outcomes are automatically excluded.
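
A trivial illustration of the general point, using a simple moving-average low-pass filter on a synthetic series (this says nothing about any particular climate model; it just shows that whatever lives outside the pass band disappears from the filtered record):

```python
# Apply a simple moving-average low-pass filter to a synthetic series made of
# a slow "signal" plus fast variability: the fast component is almost entirely
# removed, so anything living only at those frequencies becomes invisible.
import numpy as np

t = np.arange(0, 100, 0.1)
slow = np.sin(2 * np.pi * t / 50)          # slow component (the "expected" signal)
fast = 0.5 * np.sin(2 * np.pi * t / 2)     # fast variability
series = slow + fast

window = 21
kernel = np.ones(window) / window
filtered = np.convolve(series, kernel, mode="same")

interior = slice(window, -window)          # ignore edge effects of the filter
print("fast variability in raw series:  ",
      round(float(np.std(series[interior] - slow[interior])), 3))
print("fast variability after filtering:",
      round(float(np.std(filtered[interior] - slow[interior])), 3))
```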

B. Model Stability

It is clear that climate models can produce climate trajectories that rapidly diverge from reality (eg the "April drought" that was forecast in March). As climate models propagate their scalar and vector fields forwards in time, errors accumulate, and it is now clear that they start to produce impossible physical, dynamical or thermodynamical states (eg violation of conservation of mass). Any procedure, such as periodically re-setting model states to within arbitrarily defined boundary conditions, completely invalidates the fidelity of any information generated.
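
A toy analogue of the cumulative-error problem (nothing to do with how any real GCM is coded): integrate a frictionless oscillator, whose energy should be exactly conserved, with a naive explicit scheme, and the "conserved" quantity drifts steadily as the errors accumulate:

```python
# Explicit Euler integration of x'' = -x. Total energy 0.5*(x^2 + v^2) should
# stay at 0.5 forever; instead it grows steadily as time-stepping errors
# accumulate. A toy analogue only - not a claim about any actual climate model.
dt, steps = 0.01, 100_000
x, v = 1.0, 0.0                             # initial position and velocity

energies = []
for step in range(steps):
    x, v = x + dt * v, v - dt * x           # explicit Euler update
    if step % 10_000 == 0:
        energies.append(0.5 * (x ** 2 + v ** 2))

print("energy should remain 0.5; instead it drifts:")
print([round(e, 3) for e in energies])
```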

There may well be other areas in which these models are failing. I would be interested to see what others think.

May 22, 2012 at 8:37 AM | Unregistered CommenterRoger Longstaff
