Friday, June 21, 2013

Brown out

Robert G. Brown (rgbatduke) has posted another devastating comment at WUWT, which I am again taking the liberty of reproducing in full here. For the counter-view, see Matt Briggs here.

Sorry, I missed the reposting of my comment. First of all, let me apologize for the typos and so on. Second, to address Nick Stokes in particular (again) and put it on the record in this discussion as well, the AR4 Summary for Policy Makers does exactly what I discuss above, and Figure 1.4 in the unpublished AR5 appears poised to do exactly the same thing once again: turning an average of ensemble results, and standard deviations about that ensemble average, into explicit predictions for policy makers regarding probable ranges of warming under various emission scenarios.

This is not a matter of debating whether it is Monckton who is at fault for computing an R-value or p-value from the mish-mosh of climate results and comparing the result to the actual climate — that is indeed wrong, and it is wrong for the same reasons I discuss above: there is no reason to think that the central limit theorem, and by inheritance the error function or other normal-derived estimates of probability, will have the slightest relevance to any of the climate models, let alone all of them together. One can at best take any given GCM run and compare it to the actual data, or take an ensemble of Monte Carlo inputs, develop many runs, and look at the spread of results and compare THAT to the actual data.

In the latter case one is already stuck making a Bayesian analysis of the model results compared to the observational data (PER model, not collectively), because when one determines e.g. the permitted range of random variation of any given input, one is basically inserting a Bayesian prior (the probability distribution of the variations) on TOP of the rest of the statistical analysis. Indeed, there are many Bayesian priors underlying the physics, the implementation, the approximations in the physics, the initial conditions, the values of the input parameters. Without wishing to address whether or not this sort of Bayesian analysis is the rule rather than the exception in climate science, one can derive a simple inequality suggesting that the uncertainty in each Bayesian prior on average increases the uncertainty in the predictions of the underlying model. I don't want to say "proves", because the climate is nonlinear and chaotic, and chaotic systems can be surprising, but the intuitive order of things is that if the inputs are less certain, and the outputs depend nontrivially on the inputs, then the outputs are less certain as well.
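
A toy Monte Carlo sketch of that intuition in Python (the nonlinear function f and the prior widths are invented stand-ins, not anything taken from a real model):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Invented nonlinear "model": the output depends nontrivially on the input.
    return np.exp(0.5 * x) + np.sin(3.0 * x)

# Widen the Bayesian prior on the input and watch the output spread grow.
for sigma in (0.1, 0.3, 1.0):
    x = rng.normal(0.0, sigma, 100_000)   # prior over the uncertain input
    print(f"input sd = {sigma:.1f} -> output sd = {f(x).std():.3f}")
```

Less certain inputs give less certain outputs; the inequality holds for this toy on average, though, as noted above, a chaotic system is entitled to surprise us.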

I will also note that one of the beauties of Bayes' theorem is that one can actually start from an arbitrary (and incorrect) prior and, using incoming data, correct that prior to improve the quality of any given model's predictions against the actual data. A classic example of this is Polya's Urn: determining the unbiased probability of drawing a red ball from an urn containing red and green balls (with replacement and shuffling of the urn between trials). Initially, we might invoke maximum entropy and adopt a 50-50 prior — equal probability of drawing red or green balls. Or we might think to ourselves that the preparer of the urn is sneaky and likely to have filled the urn only with green balls, and start with a prior estimate of zero. After one draws a single ball from the urn, however, we have additional information — the prior plus the knowledge that we've drawn a (say) red ball. This instantly increases our estimate of the probability of getting red balls from the prior of 0, and actually very slightly increases our estimate from 0.5 as well. The more trials you make (with replacement), the better your successive approximations of the probability become, regardless of where you begin with your priors. Certain priors will, of course, do a lot better than others!
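
A minimal sketch of that updating in Python, assuming the standard conjugate Beta prior over the red fraction (the urn's true composition, 0.7, is invented for the demonstration):

```python
import random

random.seed(1)
TRUE_RED = 0.7        # hidden fraction of red balls (invented for this demo)
a, b = 1.0, 1.0       # Beta(1,1): the maximum-entropy 50-50 starting prior

for trial in range(1, 201):
    drew_red = random.random() < TRUE_RED      # one draw, with replacement
    a, b = (a + 1, b) if drew_red else (a, b + 1)
    if trial in (1, 10, 50, 200):
        print(f"after {trial:3d} draws: estimated P(red) = {a / (a + b):.3f}")
```

The posterior mean drifts toward the true fraction whatever the starting prior, exactly as described; a more opinionated prior (say Beta(1, 10) for the sneaky-preparer guess) just takes a few more draws to be washed out.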

I therefore repeat to Nick the question I asked on other threads. Is the near-neutral variation in global temperature for at least 1/8 of a century (since 2000, to avoid the issue of 13, 15, or 17 years of "no significant warming" given the 1997/1999 El Nino/La Nina one-two punch, since we have no real idea what "significant" means given observed natural variability in the global climate record that is almost indistinguishable from the variability of the last 50 years) strong evidence for warming of 2.5 C by the end of the century? Is it even weak evidence for it? Or is it in fact evidence that ought, to at least some extent, to decrease our degree of belief in aggressive warming over the rest of the century, just as drawing red balls from the urn ought to cause us to alter our prior beliefs about the probable fraction of red balls in Polya's urn, completely independent of the priors used as the basis of the belief?

In the end, though, the reason I posted the original comment on Monckton's list is that everybody commits this statistical sin when working with the GCMs. They have to. The only way to convince anyone that the GCMs might be correct in their egregious predictions of catastrophic warming is by establishing that the current flat spell is somehow within their permitted/expected range of variation. So no matter how the spaghetti of GCM predictions is computed and presented (and in figure 11.33b — not 11.33a — they are presented as an opaque range, by the way), presenting their collective variance in any way whatsoever is an obvious visual sham, one intended to show that the lower edge of that variance barely contains the actual observational data.

Personally, I would consider that evidence that, collectively or singly, the models are not terribly good and should not be taken seriously because I think that reality is probably following the most likely dynamical evolution, not the least likely, and so I judge the models on the basis of reality and not the other way around. But whether or not one wishes to accept that argument, two very simple conclusions one has little choice but to accept are that using statistics correctly is better than using it incorrectly, and that the only correct way to statistically analyze and compare the predictions of the GCMs one at a time to nature is to use Bayesian analysis, because we lack an ensemble of identical worlds.

I make this point to put the writers of the Summary for Policy Makers for AR5 on notice: if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms "mean and standard deviation" of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay, because GCM results are not iid samples drawn from a fixed distribution; they thereby fail to satisfy the elementary axioms of statistics and render both the mean behavior and the standard deviation of the mean behavior over the "space" of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world. Literally meaningless. Without meaning.
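
A minimal simulation of that failure mode, with all numbers invented: twenty "models", each carrying its own systematic bias, treated as if they were iid draws from a distribution centred on reality:

```python
import numpy as np

rng = np.random.default_rng(42)
TRUTH = 0.0    # the real-world quantity being "predicted" (invented units)

covered = 0
trials = 2000
for _ in range(trials):
    biases = rng.uniform(0.5, 2.0, 20)        # each "model" has its own offset
    runs = biases + rng.normal(0.0, 0.1, 20)  # one run per model
    mean, sem = runs.mean(), runs.std(ddof=1) / np.sqrt(20)
    covered += abs(mean - TRUTH) < 1.96 * sem  # the CLT-style "95%" interval

print(f"nominal 95% interval covers the truth {100 * covered / trials:.1f}% of the time")
```

The nominal 95% interval covers the truth essentially never: the runs are iid around the ensemble of biases, not around reality, so the mean and standard deviation of the ensemble carry no theory-supported predictive force.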

The probability ranges published in AR4's summary for policy makers are utterly indefensible by means of the correct application of statistics to the output from the GCMs, collectively or singly. When one assigns a probability such as "67%" to some outcome, in science one had better be able to defend that assignment from the correct application of axiomatic statistics right down to the number itself. Otherwise, one is indeed making a Ouija board prediction, which, as Greg pointed out on the original thread, is an example deliberately chosen because we all know how Ouija boards work! They spell out whatever the sneakiest, strongest person playing the game wants them to spell.

If any of the individuals who helped to actually write this summary would like to come forward and explain in detail how they derived the probability ranges that make it so easy for the policy makers to understand how likely, even certain, it is that we are en route to catastrophe, they should feel free to do so. And if they did in fact form the mean of many GCM predictions as if GCMs are some sort of random variate, form the standard deviation of the GCM predictions around the mean, and then determine the probability ranges on the basis of the central limit theorem and the standard error function of the normal distribution (as it is almost certain they did, from the figure caption and following text), then they should be ashamed of themselves and indeed should go back to school, and perhaps even take a course or two in statistics, before writing a summary for policy makers that presents information influencing the spending of hundreds of billions of dollars based on statistical nonsense.

And for the sake of all of us who have to pay for those sins in the form of misdirected resources, please, please do not repeat the mistake in AR5. Stop using phrases like "67% likely" or "95% certain" in reference to GCM predictions unless you can back them up within the confines of properly done statistical analysis and mere common wisdom in the field of predictive modeling — a field where I am moderately expert — where if anybody ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything, I will indeed bitch slap them the minute they reach for my wallet as a consequence.

Predictive modeling is difficult. Using the normal distribution in predictive modeling of complex multivariate systems is (as Taleb points out at great length in The Black Swan) easy but dumb. Using it in predictive modeling of the most complex system of nominally deterministic equations — a double set of coupled Navier-Stokes equations with imperfectly known parameters, on a rotating inhomogeneous ball, in an erratic orbit, around a variable star, with an almost complete lack of predictive skill in any of the inputs (say, the probable state of the sun in fifteen years), let alone the output — is beyond dumb. Dumber than dumb. Dumb cubed. The exponential of dumb. The phase-space-filling exponential growth of probable error to the physically permitted boundaries dumb.

In my opinion — as admittedly at best a well-educated climate hobbyist, not a climate professional, so weight that opinion as you will — we do not know how to construct a predictive climate model, and will never succeed in doing so as long as we focus on trying to explain "anomalies" instead of the gross nonlinear behavior of the climate on geological timescales. An example I recently gave for this is understanding the tides. Tidal "forces" can easily be understood and derived as the pseudoforces that arise in an accelerating frame of reference, relative to Newton's Law of Gravitation. Given the latter, one can very simply compute the actual gravitational force on an object at an actual distance from (say) the moon, compare it to the actual mass times the acceleration of the object as it moves at rest relative to the center of mass of the Earth (which is accelerating relative to the moon), and compute the change in e.g. the normal force that makes up the difference, and hence the change in apparent weight. The result is a pseudoforce that varies like (R_e/R_lo)^3 (compared to the force of gravity, which varies like 1/R_lo^2; here R_e is the radius of the Earth and R_lo the radius of the lunar orbit). This is a good enough explanation that first year college physics students can, with the binomial expansion, compute the lunar tidal force — and, if they are physics majors, even the nonlinear tidal force stressing e.g. a solid bar falling into a neutron star.
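
A quick numerical check of that scaling, using standard values for the constants; only the leading term of the binomial expansion is kept:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.342e22    # mass of the moon, kg
R_e = 6.371e6        # radius of the Earth, m
R_lo = 3.844e8       # radius of the lunar orbit, m

# Exact difference between lunar gravity at the near surface and at Earth's centre:
exact = G * M_moon * (1.0 / (R_lo - R_e) ** 2 - 1.0 / R_lo ** 2)

# Leading (binomial-expansion) term: 2 G M R_e / R_lo^3
leading = 2.0 * G * M_moon * R_e / R_lo ** 3

print(f"exact tidal acceleration:   {exact:.3e} m/s^2")
print(f"leading-order term:         {leading:.3e} m/s^2")
print(f"as a fraction of surface g: {leading / 9.81:.1e}")
```

Both come out near 1.1e-6 m/s^2, roughly 1e-7 of g, matching the (R_e/R_lo)^3 scaling in the text once the lunar-to-terrestrial mass ratio and a factor of 2 are folded in.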

It is not possible to come up with a meaningful heuristic for the tides lacking a knowledge of both Newton's Law of Gravitation and Newton's Second Law. One can make tide tables, sure, but one cannot tell how the tables would CHANGE if the moon were closer, and one couldn't begin to compute e.g. Roche's Limit, or tidal forces outside of the narrow Taylor series expansion regime where R_e/R_lo << 1. And then there are the sun and solar tides, making even the construction of a heuristic tide table an art form.

The reason we cannot make sense of it is that the actual interaction and acceleration are nonlinear functions of multiple coordinates. Note well: simple, and yet nonlinear. And we are still a long way from solving anything like an actual equation of motion for the sloshing of oceans or the atmosphere due to tidal pseudoforces, even though the pseudoforces themselves are comparatively simple in the expansion regime. This is still way simpler than any climate problem.

Trying to explain the nonlinear climate by linearizing around some set of imagined “natural values” of input parameters and then attempting to predict an anomaly is just like trying to compute the tides without being able to compute the actual orbit due to gravitation first. It is building a Ptolemaic theory of tidal epicycles instead of observing the sky first, determining Kepler’s Laws from the data second, and discovering the laws of motion and gravitation that explain the data third, finding that they explain more observations than the original data (e.g. cometary orbits) fourth, and then deriving the correct theory of the tidal pseudoforces as a direct consequence of the working theory and observing agreement there fifth.

In this process we are still at the stage of Tycho Brahe and Johannes Kepler, patiently accumulating reliable, precise observational data and trying to organize it into crude rules. We are only decades into it — our accurate knowledge of the ocean (70% of the Earth's surface) is at most decades long, and the reliable satellite record is not much longer. Before that we have a handful of decades of spotty observation — before World War II there was little appreciation of global weather at all, and little means of observing it — and at most a century or so of thermometric data, of indifferent quality and precision, sampling an ever smaller fraction of the Earth's surface the further back one goes. Before that, everything is known at best by proxies — which isn't to say that there is no knowledge there, but the error bars jump profoundly, as the proxies don't do very well at predicting the current temperature outside of any narrow fit range; most of the proxies are multivariate and hence easily confounded, or merely blurred out by the passage of time. They are pre-Ptolemaic data — enough to see that the planets are wandering with respect to the fixed stars, and perhaps even enough to discern epicyclic patterns, but not enough to build a proper predictive model, and certainly not enough to discern the underlying true dynamics.

I assert — as a modest proposal indeed — that we do not know enough to build a good, working climate model. We will not know enough until we can build a working climate model that predicts the past — explains in some detail the last 2000 years of proxy derived data, including the Little Ice Age and Dalton Minimum, the Roman and Medieval warm periods, and all of the other significant decadal and century scale variations in the climate clearly visible in the proxies. Such a theory would constitute the moral equivalent of Newton’s Law of Gravitation — sufficient to predict gross motion and even secondary gross phenomena like the tides, although difficult to use to compute a tide table from first principles. Once we can predict and understand the gross motion of the climate, perhaps we can discern and measure the actual “warming signal”, if any, from CO_2. In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.

Let me make this perfectly clear. The WHO has been publishing absurdities such as the "number of people killed every year by global warming" (subject to a dizzying tower of Bayesian priors I will not attempt to deconstruct, but which render the number utterly meaningless). We can easily add to this the number of people each year who have died but whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization.

Does anyone doubt that the ratio of the latter to the former — even granting the accuracy of the former — is at least a thousand to one? Think of what a billion dollars would do in the hands of Unicef, or Care. Think of the schools, the power plants, the businesses another billion dollars would pay for in India, in central Africa. Go ahead, think about spending 498 more billions of dollars to improve the lives of the world's poorest people, to build up its weakest economies. Think of the difference not spending money building inefficient energy resources in Europe would have made in the European economy — more than enough to have completely prevented the fiscal crisis that almost brought down the Euro and might yet do so.

That is why presenting numbers like "67% likely", on the basis of gaussian estimates of the variance of averaged GCM numbers, as if they have some defensible predictive force, to those who are utterly incapable of knowing better, is not just dumb. The nicest interpretation of it is incompetence. The harshest is criminal malfeasance — deliberately misleading the entire world in such a way that millions have died unnecessarily, whole economies have been driven to the wall, and worldwide suffering is vastly greater than it might have been if we had spent the last twenty years building global civilization instead of trying to tear it down!

Even if the predictions of catastrophe in 2100 are true — and so far observation, as opposed to extrapolation from models that rather appear to be failing, gives little reason to think that they are — it is still not clear that we shouldn't have opted for civilization building first as the lesser of the two evils.

I will conclude with my last standard "challenge" for the warmists, those who continue to firmly believe in an oncoming disaster in spite of no particular discernible warming (at anything like a "catastrophic" rate) for somewhere between 13 and 17 years, in spite of an utterly insignificant rate of sea level rise, and in spite of the growing divergence between the models and reality. If you truly wish to save civilization, and truly believe that carbon burning might bring it down, then campaign for nuclear power instead of solar or wind power. Nuclear power would replace carbon burning now, and do so in such a way that the all-important electrical supply is secure and reliable. Campaign for research, at levels not seen since the development of the nuclear bomb, into thorium-burning fission plants: the US has a thorium supply in North Carolina alone that would meet its total energy needs for a period longer than the Holocene, and so do India and China — collectively a huge chunk of the world's population right there (and thorium is mined with the rare earth metals needed in batteries, high-efficiency electrical motors, and more, reducing the prices of all of these key metals in the world marketplace). Stop advocating the subsidy of alternative energy sources where those sources cannot pay for themselves. Stop opposing the burning of carbon for fuel while it is needed to sustain civilization, and recognize that if the world economy crashes, if civilization falls, it will be a disaster that easily rivals the worst of your fears from a warmer climate.

Otherwise, while "deniers" might have the blood of future innocents on their hands if your beliefs about the future turn out to be correct, you'll continue to have the blood of many avoidable deaths in the present on your own.

[Updated to add link to Briggs.]

 


Reader Comments (79)

I wonder

Jun 21, 2013 at 8:40 AM | Unregistered CommenterBrute

I might frame this guy's postings :-)

It's past time the statistical societies took a firm hand with climate science.

Jun 21, 2013 at 8:51 AM | Unregistered CommenterTinyCO2

There are at least two misunderstandings here. They both depend on identifying what scientists believe their duty to be.

In the case of AR4 and 5, they've been tasked to _present_ the science to policymakers. This means whenever there's a choice to be made between presentation and science, presentation (oversimplification) wins.

As per the modellers, Gavin has been explicit for years: his job is to use the data to improve the models. Such an improvement is the only thing that matters, not actually being already able to reproduce a reality where solar changes or a volcano can ruin all forecasts at any time.

In fact EVERY so-called climate satellite launched so far has been placed in an orbit that makes it impossible to use for climate (multidecadal) observations. They're mostly good at, and publicised as being good at... helping to improve the models.

FWIW I think our focus wrt AR5 should be to make all of these truisms known, so that choices can be made in an informed way, rather than wasting our time in futile self-righteous indignation.

Jun 21, 2013 at 8:57 AM | Registered Commenteromnologos

WMBriggs' update is rather more telling than his original counter argument:

""Update Although it is true ensemble forecasting makes sense, I do NOT claim that they do well in practice for climate models. I also dispute the notion that we have to act before we are able to verify the models. That’s nuts. If that logic held, then we would have to act on any bizarre notion that took our fancy as long as we perceived it might be a big enough threat.

Come to think of it, that’s how politicians gain power.""

On the face of it that does seem to say that Briggs actually agrees with rgbatduke who is talking specifically about climate models.

Jun 21, 2013 at 8:59 AM | Unregistered CommenterDisko Troop

Fantastic stuff. If nothing else it needs physically publishing in a letters to the editor section of a paper or journal. So we can point to it later and say they were told.

Jun 21, 2013 at 9:08 AM | Unregistered CommenterDuncan

Dr Brown is right, dead right, and Briggs is wrong, dead wrong. Dr Brown is arguing, from a technical, knowledge-based standpoint, that the models are wrong no matter how you cut them. Briggs seems to be supporting the old meme of the precautionary principle, and that is always wrong when civilisation is the target.

Jun 21, 2013 at 9:12 AM | Unregistered CommenterStephen Richards

"In ancient times they had no statistics so they had to fall back on lies". - Stephen Leacock

Jun 21, 2013 at 9:24 AM | Unregistered CommenterBrianSJ

@Disko

I don't think they do agree. Given my understanding (which is tenuous at best...) rgbatduke is arguing (at least given his example) that averaging or otherwise merging various models that appear over time, each with a "new & improved" physical aspect, makes no sense. In this context I can see his argument - to leave out known information just to generate alternate models is daft.

However, the GCMs do not represent a single model improving over time. They are 20-30 parallel models, each with its own interpretation of "basic physics" and its own explicit and implicit assumptions. Averaging them together is, in one sense, just another mathematical step in generating the final model. If I understand Briggs, this is where he's coming from.

That being said, the one thing they both most emphatically agree on is that a model is only worth anything if it is in some way validated against reality, and we all know how well the GCMs do that. Throw out the bad ones and focus on making the remainder better... though the focus of the above article suggests that this is beyond the meagre efforts of mankind for the foreseeable future.

Jun 21, 2013 at 9:34 AM | Unregistered Commenterflintwingel

Anthony has a relevant post of an interview with Hans Von Storch:

http://tinyurl.com/WUWT-Storch-interview

Jun 21, 2013 at 9:43 AM | Unregistered Commenternot banned yet

I can see the logic in RGB's argument but not in WMB's.

When I was a senior weather forecaster for a private weather company, I used five different models. I most certainly didn't use an average of them, because each of the models had its own strengths and weaknesses, based on comparison of its past performance against reality.

One model had better accuracy over 1-3 days; another over 5-8 days; and another over 9-14 days. Using this as a base, they were checked against a fourth model. Finally, having made preliminary decisions for each time period, these were compared to the most reliable surface and upper-air models.

Of course when they all told the same story, it made life easier. But it wasn't an average.

Jun 21, 2013 at 9:43 AM | Unregistered CommenterNeilC

"As per the modellers, Gavin has been explicit for years: his job is to use the data to improve the models. Such an improvement is the only thing that matters, not actually being already able to reproduce a reality where solar changes or a volcano can ruin all forecasts at any time."

AFAICT this improvement is imaginary. Does anybody have any metrics which demonstrate historical GCM/ESM improvements?

Jun 21, 2013 at 9:50 AM | Unregistered Commenternot banned yet

Re Monte Carlo, I wonder if climatologists always do poorly at the roulette table because they think that the median value (19) is the one to bet on..?

Jun 21, 2013 at 9:50 AM | Registered Commenterjamesp

Given the significant amount of technical detail combined in both RGB posts, can someone with the expertise to do so post an abstract of them, to keep us simpletons in the loop? Ta in advance.

Jun 21, 2013 at 9:51 AM | Unregistered Commentercheshirered

Briggs has been posting a lot recently, and I suspect this one was rather hastily published. Bayes is not a god, and nor is 'frequentist' analysis without value. Briggs has added this update to his post:

Update Although it is true ensemble forecasting makes sense, I do NOT claim that they do well in practice for climate models. I also dispute the notion that we have to act before we are able to verify the models. That’s nuts. If that logic held, then we would have to act on any bizarre notion that took our fancy as long as we perceived it might be a big enough threat.

Come to think of it, that’s how politicians gain power.

Jun 21, 2013 at 10:06 AM | Registered CommenterJohn Shade

Cheshirered – I understood him… well, okay, I understood his conclusion. Surely, if a numpty such as myself can understand, someone like yourself may be trying to read more into it than is there.

Jamesp – maybe that is where I have been going wrong. (And I could have been so rich!)

Jun 21, 2013 at 10:09 AM | Unregistered CommenterRadical Rodent

cheshirered - consider these:

1. Model A: the Earth is a perfect sphere 12,000km in diameter
2. Model B: the Earth is an oblate spheroid with an average diameter of 12,742km

Without fear of being found mistaken I can state the following:

1. Model B is better than Model A
2. The average of Model A and Model B, however taken, is bound to be worse than Model B

Likewise for climate models. As they are not just random samples of a single "model space", it makes no scientific sense to go by their "average".

HTH

Jun 21, 2013 at 10:33 AM | Registered Commenteromnologos

And for the sake of all of us who have to pay for those sins in the form of misdirected resources, please, please do not repeat the mistake in AR5. Stop using phrases like "67% likely" or "95% certain" in reference to GCM predictions unless you can back them up within the confines of properly done statistical analysis and mere common wisdom in the field of predictive modeling — a field where I am moderately expert — where if anybody ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything, I will indeed bitch slap them the minute they reach for my wallet as a consequence.

Kaboom! Not just head hitting nail, but Anti-matter bomb hitting space station....

And the last two paragraphs are also exactly spot on!!

Great job!

Jun 21, 2013 at 10:38 AM | Unregistered Commenterwijnand

I think the gist of rgbatduke's post is that you can't average models which are supposed to be modelling the same underlying physics in the same way you can average multiple runs of the same model.

In a simplistic sense, consider 2 models.

Model A : models temps going up from 15 degrees to 17 degrees over the period 2000-2010
Model B : models temps going down from 15 degrees to 13 degrees over the same period.

If you 'average' these two together, you get an 'ensemble' trend that is flat from 2000-2010. If this happens to match reality, can you claim that the ensemble of models is good?

The answer is no.

Model A was showing an upward trend, so whatever it was using as its underlying physics is wrong - and over time, this wrongness will possibly get bigger and bigger. By 2050 or 2100, this model will be trending far, far higher than reality, and it will be obvious that it is useless.

Model B was showing a downward trend, so whatever it was using as its underlying physics is wrong - and over time, this wrongness will possibly get bigger and bigger. By 2050 or 2100, this model will be trending far, far lower than reality, and it will be obvious that it is useless.

But both models were attempting to model the same thing. Individually, their methods (formulae, algorithms, priors, call it what you will) were wrong, and should be discarded. The fact that two models got it wrong so badly in opposite directions enough to balance out the ensemble average is mere coincidence, and is an artefact of which models you decide to include in the ensemble.

Imagine in this world that the real temperature trend is up slightly. So out of all the candidate models, you choose another model which resembles model A, call it A1. This model trends upwards over the same time period, perhaps it uses similar priors to A, but a few minor differences.

Now you get the average of two upward trends and a downward one - voila, the ensemble trend is now slightly upwards. Need to tweak it some more to get closer to a match with reality? Add another model, and another... until the ensemble matches reality.

The choice of which models to include in the ensemble can change the ensemble trend in any way you want.

The fact that the ensemble they do have trends upwards far in excess of reality proves only one thing - that the models in the ensemble all exaggerate warming. They are all wrong in the same direction - exaggeration.
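
A minimal numerical sketch of that two-model example in Python (the slight upward "reality" trend is invented purely for illustration):

```python
import numpy as np

years = np.arange(2000, 2101)
model_a = 15.0 + 0.2 * (years - 2000)   # trends up, as in the example
model_b = 15.0 - 0.2 * (years - 2000)   # trends down by the same amount
ensemble = (model_a + model_b) / 2.0    # dead flat in every year

reality = 15.0 + 0.01 * (years - 2000)  # invented slight warming

for y in (2010, 2050, 2100):
    i = y - 2000
    print(f"{y}: A={model_a[i]:5.1f}  B={model_b[i]:5.1f}  "
          f"ensemble={ensemble[i]:5.1f}  reality={reality[i]:5.2f}")
```

The ensemble stays within about a degree of "reality" for a full century while both of its members diverge without bound: agreement by cancellation, not skill.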

Jun 21, 2013 at 10:39 AM | Unregistered CommenterTheBigYinJames

In the past in order to get a broader consensus did we try to average the Flat Earth and Round Earth models or the Sun/Earth centred solar system models?

Averaging different 'climate models' is nothing like weather forecasting ensembles. Also in the end, at most one model can be correct and all the others are wrong. Why don't they start averaging CO2 and Svensmark models of climate change to get an even larger spread of possibilities.

Jun 21, 2013 at 10:48 AM | Unregistered CommenterRob Burton

@omnologos

it makes no scientific sense to go by their "average".

But we are talking about mathematical constructs here, and the only real test of a mathematical construct is: does its output match reality? The fact that the IPCC seem to believe that the final step in their mathematical construct is averaging together a bunch of other mathematical constructs doesn't really change that test.

I think the "averaging" step is an acknowledgment that all the models are, to one degree or another, crap, and that an average somehow reduces the level of crappiness. Quite how, I'm not sure...

What doesn't make "scientific sense" is pretending, in the absence of proper validation (or, worse still, in the face of clear and unambiguous deviation from reality), that any of the GCMs, alone or averaged together, are in any way useful predictors.

Jun 21, 2013 at 10:49 AM | Unregistered Commenterflintwingel

A much simpler example: a teacher wants to see if the pupils in her class are psychic, so she thinks of a number between one and ten, and tells each child to guess a number.

The children shout out random numbers.

At the end, the teacher takes the average of all the numbers and finds that it is very close to the number she thought of. She concludes the children are psychic, even though no child individually was anywhere close to guessing the correct number.

Jun 21, 2013 at 10:51 AM | Unregistered CommenterTheBigYinJames

It seems to me that Briggs is answering an abstract question about whether averaging models is always a bad thing.

His example shows this: if you have several models, all of which you think may provide reasonable results, but they differ, then averaging them is likely to remove the influence of the outliers. He then treats the variance between the models as if it covers the underlying uncertainty. In an abstract mathematical way, this makes sense, and that is after all Briggs's field.

But RGB is pointing out that the outliers, when looking at the GCMs, are plain wrong. Letting them have any influence on the final result makes no sense — it means you have actively decided to make the result less accurate.

And worse — assuming that the spread of the models in some way reflects uncertainty is plain wrong. It reflects the incompetence (lack of skill) of the modellers (models).

Once the better models have been selected, the spread of multiple runs of these models alone reflects the uncertainty, assuming that the priors also reflect the historical uncertainty.
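
A sketch of that selection step in Python, with invented models as (bias, noise) pairs; the point is only the order of operations: validate first, then quote the spread of the survivors' runs:

```python
import numpy as np

rng = np.random.default_rng(7)
OBSERVED = 0.1    # the observed quantity, invented units

# Five "models": each is (systematic bias, run-to-run noise). Only some are any good.
models = [(0.0, 0.05), (0.6, 0.05), (-0.5, 0.05), (0.05, 0.08), (1.0, 0.05)]

def runs(bias, noise, n=200):
    # Monte Carlo runs of one model around its own biased central value.
    return OBSERVED + bias + rng.normal(0.0, noise, n)

# Step 1: keep only models whose own run-spread plausibly contains the observation.
kept = [m for m in models if abs(runs(*m).mean() - OBSERVED) < 2.0 * m[1]]

# Step 2: uncertainty comes from the spread of the kept models' runs,
# not from the scatter between good and bad models.
spread = np.concatenate([runs(*m) for m in kept])
print(f"kept {len(kept)} of {len(models)} models; run spread sd = {spread.std():.3f}")
```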

Jun 21, 2013 at 10:57 AM | Registered Commentersteve ta

I did post this on the RGB thread at WUWT this morning, but that thread is getting old. This was intended in response to Nick Stokes:

What difference does it make that they show the mean or the median + the envelope/variance of the models? What message is being conveyed? The point is that any summary of the models like this is useless.

The IPCC continually tries to pretend that these are just "scenarios", not predictions. If those "scenarios" are so unlikely as to be impossible, then what use are the models? This is not an academic exercise: IPCC reports are used to set public policies costing billions, even trillions of dollars. Do you think the summary for policymakers, and the activists who take away those messages, see your nuanced hair-splitting over whether it's the mean, or median, or whatever? Those graphs have a message to deliver: the message is that the models can predict the climate into the future, the future looks bad, and we all need to act now.

As RGB points out, the models — whether plotted as spaghetti, or summarised as a median and envelope, or any other pointless statistic you want to agonise over — do not agree with reality and therefore should be disregarded. We cannot currently predict the future climate, and given its non-linearity and complexity it is unlikely that we will be able to for a long time to come. If we cannot model the climate for even a short period with any degree of accuracy then we should stop doing so and admit that we don't know what the future climate will be. Anything less is negligence and, if intended to mislead, criminal.

What is the difference between a “scenario” and a “prediction”?

Jun 21, 2013 at 10:58 AM | Registered Commenterthinkingscientist

Scenario is the postulated starting point, a projection is the resulting ending point. Multiple scenarios generate multiple projections.

IMO projections are the same as predictions provided they are quoted with their associated generating scenarios.

Jun 21, 2013 at 11:08 AM | Unregistered Commenternot banned yet

I read this yesterday on WUWT, and considered it a masterclass of logical thought. Also, I thought that his new analogy - predicting the tides - was better than that of the structure of the carbon atom, which he used originally. It is more relevant - with Newton's laws of gravitation and motion, and basic empirical data (gained from simple measurements) a theory was derived that allowed the prediction of the tides at any time in the future (and could also hindcast the past). The theory is based upon simple, deterministic equations. In comparison the CAGW modellers stand as much chance of predicting future climate states as King Canute had of turning back the tide.

Jun 21, 2013 at 11:08 AM | Unregistered CommenterRoger Longstaff

This should be a final nail in the coffin of the science-policy wedlock. In that case the climate science (or rather pseudo-science) is in trouble. However, politicians will spin their way out of this mess. Brilliant posting, Dr. Brown.

Jun 21, 2013 at 11:12 AM | Unregistered Commenteroebele bruinsma

I like his discussion of tidal forces. For a long time, warmists would counter the inevitable argument 'forecasters can't get next week's weather right, so why should we trust the models' with the counter-argument 'we may not be able to predict next week's weather, but we can predict that June will be warmer than January'. I always found that a very weak argument - as with tides, the seasons are predictable based on (1) long experience and (2) a detailed understanding of Earth's motion round the Sun, axial tilt etc. To apply the same certainty to climate modelling would be to claim both (1) prior experience of anthropogenic CO2 increases and (2) a detailed understanding of the climate system, both of which are outside the boundaries of what even climate scientists would claim for themselves.

Another long-standing warmist claim is that there could be no 'conspiracy of silence' within (climate) science because, if CAGW theories were wrong, scientists would be coming out the woodwork to make their names by disproving CAGW. On recent evidence, this could well be one thing that the warmists got right.

Jun 21, 2013 at 11:24 AM | Unregistered CommenterChris Long

I do not pretend to understand the arguments...but there is a part of me that likes to listen anyway to the sound of two expert statisticians arguing about just how wrong the climate models are....and why.

For me, it is like listening to someone speak Italian.....I cannot understand a word they are saying but it all sounds wonderful.

Jun 21, 2013 at 11:25 AM | Unregistered CommenterJack Savage

Never mind the statistics, it is conceptually crazy to claim that an ensemble of models can produce a meaningful result. Perhaps it is another consensus argument to convince the punters.

Considering that all of the models are wrong to start with, it makes no difference.

Climate science certainly is an oxymoron. Don't these guys ever get embarrassed? Why do they still believe in these models after 17 years of divergence from reality?

Jun 21, 2013 at 11:36 AM | Unregistered CommenterSchrodinger's Cat

Robert G. Brown (rgbatduke) has posted another devastating comment at WUWT, which I am again taking the liberty of reproducing in full here. For the counter-view, see Matt Briggs here.

Bish, does your description of RGB's comment as "devastating" imply that you believe Briggs to be in error?

Jun 21, 2013 at 11:37 AM | Unregistered CommenterRichieRich

I could pose the following scenarios:

The Earth could be struck by an extinction event size meteorite/asteroid within 50 years.
The Earth could be invaded by hostile aliens within 50 years.
The Earth could warm by 4 degC within 50 years.
The Earth could warm by 10 degC within 50 years.
The Earth could cool by 10 degC within 50 years.

All of these scenarios are plausible, even possible (Dansgaard-Oeschger events appear to be able to change Greenland temps by >10 degC in just a few decades). But just how likely are they? Should we act on them?

The IPCC very cleverly refers to its climate projections as "scenarios". That is what they are: if (a) is true then (b) may follow. But they make no attempt to quantify whether (a) actually is true or the likelihood that (b) really will follow. The scenarios they present fall in an incredibly small, restricted part of the possible model space attempting to simulate...cue RGB:

"...the most complex system of nominally deterministic equations — a double set of coupled Navier Stokes equations with imperfectly known parameters on a rotating inhomogeneous ball in an erratic orbit around a variable star with an almost complete lack of predictive skill in any of the inputs (say, the probable state of the sun in fifteen years), let alone the output..."

but because they don't acknowledge how large the model space really is, they create a deception. Nick Stokes has been arguing on the RGB thread that black is blue, and then white, over the nuances of averaging, or medians, or anything else, but all this is irrelevant. What matters is that the policy makers and CAGW activists behave as though the scenarios are actually predictions and likely to happen. The IPCC's clients do not understand that a "scenario" is a climate scientist playing "what if" games with a toy model.

The IPCC calls them scenarios deliberately (and arguably correctly) but knows full well its client is going to act on them as though they were reliable predictions, and never bothers to tell them that the models and their outputs are almost certainly wrong. The scenarios proposed have vanishingly small probabilities of occurrence: they are fantasy worlds, not real worlds, and reality on this Earth has caught up with them. In a nutshell, the climate models are bollocks.

Jun 21, 2013 at 11:41 AM | Registered Commenterthinkingscientist

RGB has missed one important point, though. He seems to think that when the IPCC claim that something is 67% likely then it is based on some sort of faulty statistics by scientists.

My summary would be rather that it is a number plucked from the air and agreed to by a consensus of politicians.

Jun 21, 2013 at 12:06 PM | Unregistered Commentergraphicconception

graphicconception: I think the fallacy is actually that they think it 67% likely within the narrow set of scenarios they have tested. Because the "scenarios" run do not sample a realistic model space (the physics is clearly incorrect, witness the divergence from reality, and they do not know all the natural factors), the models they end up with each have a vanishingly small probability of occurrence. In fact, the passage of time and the addition of real-world measurements show the "scenarios" to have not just vanishingly small probability, but in fact zero probability of occurrence.

Model...meet real world.

Jun 21, 2013 at 12:13 PM | Registered Commenterthinkingscientist

From an end user of weather forecasting models; I use the models that are proven to be most reliable. [governments take note]

What I see in all the GCMs is nothing close enough to reality, and hence I would bin them. [Governments take note again.]

Jun 21, 2013 at 12:33 PM | Unregistered CommenterNeilC

Thanks for the replies. It helps when better qualified opinions articulate it all into an easy summary.

Thinking Scientist's final line summarises it nicely.

Jun 21, 2013 at 12:44 PM | Unregistered Commentercheshirered

Thinking Scientist - sorry but you are missing the point re scenarios vs projections and you are adding to the confusion. In your scientific endeavours have you ever worked in modelling?

Jun 21, 2013 at 12:55 PM | Unregistered Commenternot banned yet

"But they make no attempt to quantify whether (a) actually is true or the likelihood that (b) really will follow."

I think this just about hits the nail on the head. The number of outcomes from any given scenario is essentially three — better, worse, or no change (all dependent on how you choose to define the first two). These same outcomes apply to any scenario, and there appears to be a range of scenarios, none of which is underpinned by empirical evidence and none of which has any predictive capability.

Climate science, at the moment, has reached an impasse and, as Brown rightly points out, we are spending billions which would be infinitely better spent on problems that we know exist now, rather than on theoretical possibilities that are becoming less likely by the day.

And that is before we factor in the possibility of global cooling which, I would suggest, is becoming more likely by the day, and the effects of which would be considerably more serious than the relatively minuscule warming that is being foreseen.

Jun 21, 2013 at 1:14 PM | Registered CommenterMike Jackson

The whole thing is based on a set of unfounded assumptions.

When a paper contends that foxes will become extinct in 20 years, it's because the research is based on a pack of assumptions, and they know it.

Jun 21, 2013 at 1:24 PM | Unregistered CommentereSmiff

"... where if anybody, ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything I will indeed bitch slap them the minute they reach for my wallet as a consequence."

I love this guy!!!!

Jun 21, 2013 at 1:36 PM | Unregistered CommenterO2bnaz2

F. The Projections of the Earth's Future Climate

The tools of climate models are used with future scenarios of forcing agents (e.g., greenhouse gases and aerosols) as input to make a suite of projected future climate changes that illustrates the possibilities that could lie ahead. Section F.1 provides a description of the future scenarios of forcing agents given in the IPCC Special Report on Emission Scenarios (SRES) on which, wherever possible, the future changes presented in this section are based. Sections F.2 to F.9 present the resulting projections of changes to the future climate. Finally, Section F.10 presents the results of future projections based on scenarios of a future where greenhouse gas concentrations are stabilised.

http://www.ipcc.ch/ipccreports/tar/wg1/029.htm

Jun 21, 2013 at 1:39 PM | Unregistered Commenternot banned yet

For me the far more compelling part of Prof Brown's post was the closing section, starting with:

"Let me make this perfectly clear. WHO has been publishing absurdities such as the “number of people killed every year by global warming” (subject to a dizzying tower of Bayesian priors I will not attempt to deconstruct but that render the number utterly meaningless). We can easily add to this number the number of people a year who have died whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization."

That whole final section should be published across the media. All it needs is a brief intro stating that climate science, especially the models, is nowhere near certain enough to justify the mammoth sacrifices imposed by current policies.

Jun 21, 2013 at 2:06 PM | Unregistered CommenterMikeH

@TheBigYinJames (10:51 AM) "no child was anywhere close to guessing the correct number"

Amazing, since the number chosen was between one and ten. But perhaps the children weren't told that before shouting out their random numbers, in which case their average was unlikely to be anywhere near the teacher's choice.

Perhaps the example needs a little refinement.

Jun 21, 2013 at 2:26 PM | Unregistered Commentersimon abingdon

1 to 1000 then :) You get my point.
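
For what it's worth, a quick simulation of the refined 1-to-1000 version (seeded so the run is repeatable):

```python
import random

random.seed(3)
teacher = random.randint(1, 1000)
guesses = [random.randint(1, 1000) for _ in range(30)]   # 30 children shouting

average = sum(guesses) / len(guesses)
closest = min(guesses, key=lambda g: abs(g - teacher))
print(f"teacher: {teacher}, class average: {average:.0f}, closest guess: {closest}")
```

With uniform guesses the class average clusters near 500 whatever the teacher chose, so it only ever looks "psychic" when her number happens to be mid-range; the ensemble-average trap in miniature.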

Jun 21, 2013 at 2:37 PM | Unregistered CommenterTheBigYinJames

"We can easily add to this number the number of people a year who have died whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization."

Global warming (expensive energy) increases poverty which is exactly what our American cousins want.

Jun 21, 2013 at 2:44 PM | Unregistered CommentereSmiff

Not sure why you are blaming America, most of this nonsense has always emanated from this side of the pond.

Jun 21, 2013 at 2:55 PM | Unregistered CommenterTheBigYinJames

Further update from Briggs

I weep at the difficulty of explaining things. I’ve seen comments about this post on other sites. A few understand what I said, others—who I suspect want Brown to be right but aren’t bothering to be careful about the matter—did not. Don’t bother denying it. So many people say things like, “I don’t understand Brown, but I’m going to frame his post.” Good grief.

There are two separate matters here. Keep them that way.

ONE Do ensemble forecast make statistical sense? Yes. Yes, they do. Of course they do. There is nothing in the world wrong with them. It does NOT matter whether the object of the forecast is chaotic, complex, physical, emotional, anything. All that gibberish about “random samples of models” or whatever is meaningless. There will be no “b****-slapping” anybody. (And don’t forget ensembles were invented to acknowledge the chaotic nature of the atmosphere, as I said above.)...

Jun 21, 2013 at 3:01 PM | Unregistered CommenterRichieRich

Hansen?

Jun 21, 2013 at 3:01 PM | Unregistered Commenternot banned yet

TheBigYinJames

I am blaming America because they run everything these days.

Jun 21, 2013 at 3:02 PM | Unregistered CommentereSmiff

@TheBigYinJames (2:37 PM)

Now why would the average of the children's numbers be anywhere near the teacher's choice?

Jun 21, 2013 at 3:06 PM | Unregistered Commentersimon abingdon

"most of this nonsense has always emanated from this side of the pond."

Phil Jones, Julia Slingo, Richard Betts, Tamsin Edwards, etc., etc., etc.....

Jun 21, 2013 at 3:09 PM | Unregistered CommenterRoger Longstaff
