Monday, May 27, 2013

Met insignificance 

This is an ultrasimplified version of Doug Keenan's post this morning.

The Met Office has consistently said that the temperature rise since 1850 is too large to be the result of natural causes. Questioning from Lord Donoughue elicited the information that they came to this conclusion by modelling temperatures as a straight-line trend (global warming) plus some noise to represent normal short-term variability.

However, would a model in which temperatures went up and down at random on longer timescales, but without any long-term trend at all, be a better match for the real temperature data? Doug Keenan has come up with just such a "temperature line wiggling up and down at random" model and it is indeed a much better match to the data than the "gradual warming plus a bit of random variation" model used by the Met Office. In fact it is up to a thousand times better.

In essence then, the temperature data looks more like a line wiggling up and down at random than one that has an impetus towards higher temperatures.* That being the case, the rises in temperature over the last two centuries, and over the last decades of the twentieth century, look like nothing untoward. The global warming signal has not been detected in the temperature records.
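The two candidate models are easy to sketch. Here is a minimal numpy illustration; every parameter value below is invented purely for the sketch and is not a fitted value from either the Met Office's or Doug's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 160  # roughly annual values, 1850 onwards

# Model 1: linear trend plus AR(1) noise (the Met Office-style model).
# All parameter values here are invented for illustration.
trend = 0.005 * np.arange(n)              # 0.5 degrees per century
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.6 * eps[t - 1] + rng.normal(scale=0.1)
model1 = trend + eps

# Model 2: driftless integrated AR(3) ("wiggling at random", no trend).
phi = (0.4, -0.2, 0.1)                    # invented AR coefficients
d = np.zeros(n)                           # first differences follow an AR(3)
for t in range(3, n):
    d[t] = (phi[0] * d[t - 1] + phi[1] * d[t - 2] + phi[2] * d[t - 3]
            + rng.normal(scale=0.05))
model2 = np.cumsum(d)                     # integration step: no drift term
```

Both kinds of series can wander upward over 160 points; the question the likelihood comparison answers is which model makes the observed record less surprising.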

 

*Here I'm only referring to the two models assessed. This is not to say there isn't another model with an impetus to higher temperatures that would be a better match than Doug's model. It's just that nobody has put such a third model forward yet. (H/T JK in the comments)


Reader Comments (193)

Is now a good time for that piece you promised us on the evidence for AGW?? ;-)

May 27, 2013 at 2:42 PM | Unregistered Commenter not banned yet

Thanks Andrew. I presume you mean no anthropogenic global warming signal has been detected.

May 27, 2013 at 2:42 PM | Unregistered Commenter PH

@PH

It means no warming signal of any type or cause.

What we observe (0.8 degree increase in 150 years) is explained completely by chance. Under this model, if we could re-run history we might equally expect to observe a fall of 0.8 degrees.
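This "could equally fall" point can be checked with a quick simulation. The sketch below uses a plain Gaussian random walk as a simpler stand-in for the driftless ARIMA(3,1,0) model actually fitted, and the step size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Re-run "history" 10,000 times under a driftless random walk
# (a simpler stand-in for the driftless ARIMA(3,1,0) model).
paths = np.cumsum(rng.normal(scale=0.08, size=(10_000, 150)), axis=1)
final = paths[:, -1]

# With no drift term the distribution of outcomes is symmetric:
# a net rise of 0.8 degrees is about as likely as a net fall of 0.8.
frac_up = np.mean(final > 0.8)
frac_down = np.mean(final < -0.8)
```

Under this model a substantial net rise and a substantial net fall occur in an essentially equal share of the re-runs.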

May 27, 2013 at 2:46 PM | Unregistered Commenter Geckko

Excellent summary.

Now if it were possible to show the various lines wiggling about on the same axis, then nice Mr Rose of the Daily Mail would have hardly any work to do to make it publication ready.......And even Loopy Louise at the Torygraph might understand it.....

May 27, 2013 at 3:17 PM | Unregistered Commenter Latimer Alder

To repeat myself from the previous story (where my post ended up on page 2 and nobody (well, a statistically insignificant number) goes there!):

One reason I ignored the GW story until I saw Al Gore’s film (which scared me – as was its intention – though, in my case, I tried to educate myself about it, not just believe what a possibly biased few told me) was that I wondered what the fuss was about: if it is only within the last 50 years that thermometers have become accurate to within 0.5°C, then that implies that a hundred years ago larger errors would have been acceptable. This means that, if the average thermometer error in 1880 was 0.6°C LOW (i.e. within acceptable margins), and the more modern thermometers have an average error 0.2°C HIGH (still within acceptable margins), then the real temperature increase is ZERO.

It would be interesting to know the statistical probability of that being the case.

May 27, 2013 at 3:35 PM | Unregistered Commenter Radical Rodent

'In essence then, the temperature data looks more like a line wiggling up and down at random than one that has an impetus towards higher temperatures.'

If you believe this then you really are fooling yourself. It would only be true if the only model with 'an impetus towards higher temperatures' was AR(1) plus linear trend.

Re: Latimer Alder's suggestion that Rose and the Telegraph would be interested, are you referring to the fact that Keenan makes nonsense of the claim that global warming has 'stopped' based on a model of linear trend plus noise that they have trumpeted so loudly? Or do you only like that model when it gives a convenient sound bite?

May 27, 2013 at 3:41 PM | Unregistered Commenter JK

Unfortunately an argument of statistical modelling is never going to convince the guy on the proverbial Clapham omnibus. Of course MysticMet knows this which is why they continue with the charade.

Unfortunately for the alarmist cause, nature is having the last laugh and the AGW meme is dying a slow death from the cold out there.

May 27, 2013 at 4:10 PM | Unregistered Commenter FarleyR

JK

There's some ambiguity in my final paragraph I agree - does "one" refer to the models previously discussed or to all models? I'll add a footnote.

May 27, 2013 at 4:12 PM | Registered Commenter Bishop Hill

Uh, Keenan said "too large" Bishop has written "too long", should fix that...

May 27, 2013 at 4:20 PM | Unregistered Commenter JT

@JK

'Latimer Alder's suggestion that Rose and the Telegraph would be interested, are you referring to the fact that Keenan makes nonsense of the claim that global warming has 'stopped' based on a model of linear trend plus noise that they have trumpeted so loudly?'

Correct me if I have misunderstood this, but surely if it cannot be adequately shown that 'global warming' has ever even started, then such a revelation would vastly overshadow any relatively minor quibbles about whether and when it has stopped or not.

If it has really never been detected at all - if the whole sad and shabby story is based on intellectual sand... then there is much, much more at stake than one particular model.

May 27, 2013 at 4:25 PM | Unregistered Commenter Latimer Alder

Latimer

I thought I'd fixed that - Doug pointed it out to me. Thanks.

May 27, 2013 at 4:25 PM | Registered Commenter Bishop Hill

It's a good technical summary, but it carries no reference to the Met Office's statement/agreement/corroboration as to the lack of statistical detection of AGW. That needs to be in the summary.

May 27, 2013 at 4:37 PM | Unregistered Commenter Capell

What this result confirms is that the temperature record from 1850 alone, interpreted purely as a time series, without any underlying physical model, does not show significant evidence of a linear increase over this period. But we do have good physical reasons for expecting some temperature increase from AGW, which would not be linear since 1850 but which would be near-linear since 1945 or so. We also have some grounds for doubting the long-term applicability of the ARIMA(3,1,0) statistical model, and we should interpret any question of "detection" within this wider picture.

My personal take on all this is that it has little to do with the question of detection, but it does generally support the view that long term variation has been underestimated, and that a significant fraction of recent warming could very well be natural variation rather than any driven process.

Of course the Met Office has been wildly oversimplifying things in its public presentations, and it is good to get them to admit that. But it's not necessarily wise to make equally bad oversimplifications in reply.

What I am looking forward to is the promised conversation between the two Dougs, Keenan and McNeall. That could be genuinely interesting.

May 27, 2013 at 4:40 PM | Registered Commenter Jonathan Jones

Jonathan, can you see anything in 1945 to date which shows something happening which cannot be seen in 1850-1945? Are we trying to distinguish between 'nothing much is happening' and 'something is happening, but it amounts to nothing much'? Is there any other evidence that the CO2 increase is having a temperature effect? Actual measurable evidence? Best evidence, as I call it? Because if not we are back with the tooth fairy evidence again: there must be one, because the tooth disappeared and the money turned up.

May 27, 2013 at 5:08 PM | Unregistered Commenter rhoda

" would a model in which temperatures went up and down at random on longer timescales, but without any long-term trend at all, be a better match for the real temperature data?"

Hang on - what we OBSERVE is that average temperatures have been cycling between LIA and MWP values for at least 2,000 years. I wish somebody would put some serious effort into explaining the empirical data instead of the nonsense that is going on here.

May 27, 2013 at 5:12 PM | Unregistered Commenter Roger Longstaff

This is one step towards the repeal of the 2008 Climate Change Act.

How long will it take for professional bodies that have enshrined this act into their day to day activities to respond?

How long will the BBC remain silent on this subject?

How soon will this impact the professions and industries that rely on the Met Office for guidance on 'climate change' and to what extent will it impact them, their suppliers and customers?

May 27, 2013 at 5:29 PM | Unregistered Commenter Robert Christopher

JJ: Great summary. With you on all that.

May 27, 2013 at 5:34 PM | Registered Commenter Richard Drake

Rhoda +1

Jonathan - in today's guest post by Doug K there is a link to Doug McNeall's email of 12 August 2011 which gives more detail on his views at the time. As far as your comment

"My personal take on all this is that it has little to do with the question of detection, but it does generally support the view that long term variation has been underestimated, and that a significant fraction of recent warming could very well be natural variation rather than any driven process."

goes I'm pretty sure I've seen this elsewhere being taken to imply things are worse than we thought.

In answering Rhoda, please can you give some comment on what you do think bears on the "question of detection". Thank you.

May 27, 2013 at 5:50 PM | Unregistered Commenter not banned yet

nby:

goes I'm pretty sure I've seen this elsewhere being taken to imply things are worse than we thought.

a) this sentence is hard to parse. b) what JJ wrote implied it is almost certainly better than die-hard CAGWers were thinking. Did it not? Or worse for whom?

May 27, 2013 at 6:32 PM | Registered Commenter Richard Drake

Sorry - lack of punctuation.
*******
As far as your comment:

"My personal take on all this is that it has little to do with the question of detection, but it does generally support the view that long term variation has been underestimated, and that a significant fraction of recent warming could very well be natural variation rather than any driven process."

goes, I'm pretty sure I've seen this elsewhere being taken to imply things are worse than we thought.
********
The response I recall is that the unknown natural variation could equally be downward, hence masking the anthropogenic signal. The thinking continues that, when the natural variation tends upward, it will sum with the anthropogenic signal. FWIW - I'm more of the opinion we'll fall off the head of the pin before then.

May 27, 2013 at 7:11 PM | Unregistered Commenter not banned yet

They also say that warming since 1900 is significant, with a comparison of .32 to the drift-less model. This seems to imply that if you take a more recent comparison, the likelihood of the warming being part of a random walk is much less.

What does this comparison number mean, and is there an established point at which one model is better than chance?

May 27, 2013 at 7:24 PM | Unregistered Commenter AndyL

@ Jonathan Jones 4:40 PM

The IPCC, the Met Office (upon which the UK government relies), the U.S. Climate Change Science Program (upon which Congress relies), and others have all claimed that the temperature increase alone is significant. The claim has been widely accepted by skeptics as well. Moreover, for most people alarmed about global warming, the claim seems to be a major part of their basis for alarm. As the post says, the claim has “seriously affected both policies and opinions”—and the claim is false.

Suppose that we have two statistical models of some data set. If the relative likelihood of Model 1 with respect to Model 2 is very small, then that strongly indicates that Model 1 is failing to explain some substantial structural variation in the data—regardless of the scientific plausibility of Model 2—and so Model 1 should be rejected. Thus, the post uses the ARIMA model for rejecting the trending model.
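As a sketch of the arithmetic, the relative likelihood can be computed from each model's maximised log-likelihood via AIC. The log-likelihood values below are invented purely to show the calculation; the real numbers come from fitting each model to the temperature series:

```python
import math

# Relative likelihood of the trend model with respect to the ARIMA model,
# via AIC.  The log-likelihoods below are MADE-UP numbers for illustration;
# they are not the fitted values from the analysis discussed here.
def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

ll_trend, k_trend = 100.0, 4   # linear trend + AR(1) noise (hypothetical)
ll_arima, k_arima = 107.5, 4   # driftless ARIMA(3,1,0) (hypothetical)

# exp((AIC_best - AIC_other)/2): a value far below 1 means the trend
# model fails to capture structure that the ARIMA model does.
rel_lik = math.exp((aic(ll_arima, k_arima) - aic(ll_trend, k_trend)) / 2)
```

With these invented numbers the relative likelihood is exp(-7.5), about 0.0006: the trend model is on the order of a thousand times worse, which is the kind of ratio the post reports.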

I do not know what statistical model would be most appropriate for global temperatures, and the ARIMA model is not proposed as being such—though it might be, on the time scale of relevance. Some people have told me that they believe the ARIMA model is inappropriate, because it has unbounded variance (as the time span increases). That argument seems spurious to me: a linear trend has unbounded values, as the time span increases—and no one claims that that makes a linear trend inappropriate.

I do not know of a physical reason for expecting a significant increase in temperatures from CO₂ emissions—other than that such an increase is evidenced by physical climate models (which the post acknowledges). Earth’s climate system is too complex, with interacting nonlinear feedbacks, for the simple argument about CO₂-induced absorption to be realistically applicable. My opinion is that the only way to understand the system is to simulate it.

May 27, 2013 at 8:36 PM | Unregistered Commenter Douglas J. Keenan

Rhoda,

Examining http://www.woodfortrees.org/plot/hadcrut3vgl/mean:12 I struggle to see any real difference between 1900-1945 and 1946-2012. I'm not sure how seriously to take the pre-1900 data.

So what should we deduce from this observation, and from Doug Keenan's more sophisticated analysis? My conclusion is that there is little evidence in the recent temperature record which we can use to decide on a value for the climate sensitivity, and so we should in effect fall back on our personal Bayesian priors. Your prior is that the climate sensitivity is zero, and so that is your conclusion, which is fair enough. My prior is the no-feedback sensitivity (a bit over 1K per doubling), and so that's my fall back conclusion.

May 27, 2013 at 8:48 PM | Registered Commenter Jonathan Jones

Has the Met Office shown how their decision process happened at the time? How did they come to reject the better model? An all-noise model, without trend, surely must have been one of the first to consider.

May 27, 2013 at 8:49 PM | Unregistered Commenter xyz

Doug,

A claim that the temperature increase alone is significant is foolish indeed. As far as I can see the only thing we can convincingly deduce from the modern temperature record is that the TCR isn't huge.

May 27, 2013 at 8:52 PM | Registered Commenter Jonathan Jones

Jonathan - I like this:

http://youtu.be/SbwWL5ezA4g

May 27, 2013 at 8:57 PM | Unregistered Commenter not banned yet

nby, my internet connection here is too poor to actually watch video, but Randi is great. All scientists should learn a bit about stage magic.

May 27, 2013 at 9:11 PM | Registered Commenter Jonathan Jones

Yep - and about "personal priors" :-)

May 27, 2013 at 9:19 PM | Unregistered Commenter not banned yet

Strictly, my prior is that climate sensitivity, as some sort of predictable figure over decadal and greater timescales and at regional (much less global) scales, is an unproven and may be an invalid concept. That may be logically equivalent to it being zero or quite small, but I think it is worth preserving a distinction.

I do think that we would not be arguing over things like this if there was any proper evidence of AGW at all. That is, beyond hypothesis with no demonstration and apparently no ongoing effort to find anything more.

May 27, 2013 at 9:21 PM | Unregistered Commenter Rhoda

Well, the temperature record as a random walk has been discussed before:

http://wattsupwiththat.com/2009/08/12/is-global-temperature-a-random-walk/

also:

http://wattsupwiththat.com/2011/02/14/pielk-sr-on-the-30-year-random-walk-in-surface-temperature-record/

May 27, 2013 at 9:55 PM | Unregistered Commenter John Silver

Are the temperature changes that occurred during the Medieval Warm period and the Little Ice Age random events devoid of an underlying cause?

May 27, 2013 at 10:10 PM | Unregistered Commenter Bruce

Remember the unit root saga?
Already covered by Master Josh:

http://rankexploits.com/musings/2010/joshs-unit-root-cartoon-haiku/

the overly long discussion here:

http://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/

Oh, those were the days......

May 27, 2013 at 10:21 PM | Unregistered Commenter John Silver

The problem as I see it is that the statistics depend on the assumptions of "persistence" and the model used.

I wrote a piece on Judith Curry's blog on this;

http://judithcurry.com/2012/02/19/autocorrelation-and-trends/

and frankly I wish I hadn't!

a) The likelihood of a 100 year positive linear trend is probably greater than 5% given the random inputs into the persistence model. This depends, however, on the nature of the persistence. Nevertheless 50 year trends are to be expected on the basis of all reasonable persistences.

b) The data on which to calculate persistence is lousy - it's quite an eye opener to trawl through some of the "historical" databases. Experimenting with the HADCRUT record, I rapidly realised that i) the data was sparse for a one hundred year record and ii) quite subtle processing methods could influence the results, and these needed to be understood before going any further. Estimating a persistence for the whole World is, as a Prof once remarked: "Dividing a f**t by a moonbeam and expressing the result to six significant figures." If interacting regions are considered, with different persistences, the whole thing becomes, in my view, impossible.

c) There is no satisfactory physical model for persistence. To say that temperature conforms to a power law doesn't tell one anything particularly useful - so do many things, or at least a power law can be fitted to them. So what? A multicompartment linear system will behave in a way that, with real, error-containing data, is indistinguishable from a power law model.
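Point (a) can be illustrated with a toy simulation: trendless AR(1) persistence (the coefficient 0.8 is an arbitrary illustrative choice) still produces large fitted slopes in roughly half of all runs:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 2000
t = np.arange(n)

def ar1(phi, sigma, n, rng):
    """Trendless AR(1) series: x[t] = phi*x[t-1] + noise."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(scale=sigma)
    return x

# How often does trendless persistent noise show a positive OLS slope?
slopes = np.array([np.polyfit(t, ar1(0.8, 0.1, n, rng), 1)[0]
                   for _ in range(trials)])
frac_positive = np.mean(slopes > 0)
# About half the runs trend up, and with persistence the slopes are
# large enough to look "significant" if the AR structure is ignored.
```

The point is not the particular coefficient but that any plausible persistence makes century-scale trends in trendless data unremarkable.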

I applaud Dr (?) Keenan's energy in criticising the Met Office's statistical approach. Probably the simplest hypothesis is the best.

May 27, 2013 at 11:12 PM | Unregistered Commenter RC Saumarez

http://www.realclimate.org/index.php/archives/2013/05/unforced-variations-may-2013/comment-page-10/#comment-340785

May 28, 2013 at 12:37 AM | Unregistered Commenter Entropic Man

Entropic Man (EM): Gavin says "With the colloquial meaning, there is no doubt that recent trends are significant, and nothing the Met Office said about null hypotheses has anything to add since this 'significance' comes from the best estimates we have of natural variability (control runs of climate models, estimates of naturally forced responses and analyses of paleo-climate data, etc.)".

So, according to Dr Schmidt, we already know enough about natural variability to rule it out as the main cause of the observed trend. I therefore assume EM agrees with this, though I'm not so sure everyone else here does... in fact, isn't this the whole point of the debate?

May 28, 2013 at 3:11 AM | Unregistered Commenter Dave Salt

May 27, 2013 at 4:10 PM | FarleyR

Well said. If the future of CAGW comes down to abstruse arguments among statisticians then it is dead in the water. If Slingo cannot give an answer without resorting to abstruse arguments about statistics then she is of no further use in her position.

May 28, 2013 at 4:21 AM | Unregistered Commenter Theo Goodwin

May 28, 2013 at 3:11 AM | Dave Salt

Natural variability is not a cause. It is the range, from lowest value to highest value, of our recorded data. To treat natural variability as a cause is a category mistake in logic as all good Oxford graduates know.

To say that temperatures fall within natural variability is to say simply that we have seen them before and, therefore, they need no explanation. In other words, we knew that nature could do that before manmade CO2 came along.

To say that natural variability caused the rise in temperatures from 1980 to 1996 is a confusing way of speaking which could only mean that the rise in temperatures did not exceed known maximum values. Same for rate of increase.

May 28, 2013 at 4:34 AM | Unregistered Commenter Theo Goodwin

Sometime between 2001 and 2005, a report was published about the Little Ice Age. The conclusion contained a forecast about world temperatures. I urge everyone to read it, in order to follow the author's reasoning and the papers he references.

"Conclusion

Possibly human activity is an important factor in global warming since 1900. A simpler explanation would be that the Earth is still recovering from the Little Ice Age and that human activity is a minor factor in global climate change. For how long the climate will continue to warm and by how much is uncertain. If the past is any guide, the year 500 suggests a possible scenario (figure 2). Applied to the modern period, the scenario would give 100 years of warming from 1900 to the present, 50 years of cooling to 2050, 50 years of warming to 2100, followed by 300 years of cooling."

Written at least 8 years ago, Fred Colbourne's forecast seems to have hit the nail on the head.

http://www.geoscience-environment.com/lia/lia.htm

May 28, 2013 at 5:18 AM | Registered Commenter perry

I am glad to see that these posts are slightly less euphoric than on the previous thread. I think a number of people are misunderstanding the point that Doug Keenan is making, and are somewhat overestimating its impact.
That the world has been in general warming over the last 160 years is not in question.
However, any statistical test on the temperature series related to whether that warming is “different from what is expected” requires a clear statement of what is expected. One cannot say, as the Met Office did, that the warming is statistically significant without declaring AND DEFENDING the assumptions of the underlying model for how temperature should have progressed. Without belittling what Doug has achieved, the only thing he has done is to show that the Met Office statement was poor science. He has not shown that AGW is not happening, and he has not shown that the Earth has not been warming over the last 160 years, with apologies for the double negative. Simply stated, the absence of evidence is not the same as the evidence of absence.
As Doug Keenan himself notes, the driftless ARIMA(3,1,0) model is itself very difficult to justify as a bona fide long-term null hypothesis because it is unbounded in temperature. (We shouldn't exist if this model were valid.) I disagree with Doug when he says the same complaint can be leveled against the linear trend model applied by the Met Office. The null hypothesis under this model is that the long-term temperature anomaly stays flat. It is only the alternative hypothesis which is unbounded. However, I would fully support the view that the null hypothesis of flat temperature with first order autoregressive error is patently absurd given the data, and that the linear trend model is mis-specified on the basis of any testing.
So this is not a simple matter of saying that that model is wrong and this one is right. Both models are indefensible for different reasons. We are left with the simple fact that we cannot carry out a statistical test on the temperature series for the unexpected if we don’t know what the expected is.

May 28, 2013 at 7:08 AM | Unregistered Commenter Paul_K

Paul_K

That tallies pretty much with my understanding. Once again we have climate scientists unwilling to admit their ignorance. We can say that the temperature has gone up, but we cannot detect any anomalous behaviour because, as you say, we can't define what the normal behaviour of the temperature series is.

The temperature rise is currently indistinguishable from natural variation.

May 28, 2013 at 7:26 AM | Registered Commenter Bishop Hill

Paul_K - re: "euphoria", on my part it is primarily that Doug has stuck it to the Met Office and, IMO, shown them to be both incompetent and obstructionist.

As far as your double negative goes, if you can show that AGW is happening, please do so.

May 28, 2013 at 7:28 AM | Unregistered Commenter not banned yet

PaulK,
"However, I would fully support the view that the null hypothesis of flat temperature with first order autoregressive error is patently absurd given the data, and that the linear trend model is mis-specified on the basis of any testing."
Could you unpack this? Are you saying that flat temperature is absurd? Or AR(1)? How is it mis-specified?

It's not clear to me who is supposed to be asserting that a linear trend model since 1850 is appropriate. Any links or quotes? Generally the AGW view is that the temperature rise should have been related to the forcing, probably with some lag.

May 28, 2013 at 8:25 AM | Unregistered Commenter nick stokes

Bish,
For what it is worth, I think that the issue entails a bit more than
"scientists unwilling to admit their ignorance".

"Hoist by their own petard" springs to mind.

I suspect, but cannot prove, that the Met Office was forced into this embarrassing position because of its perceived need to retain coherence in other elements of its AGW story.

Specifically, the "unit root" in the temperature series arises solely and unequivocally from the multidecadal oscillations. If the series is intelligently decomposed to eliminate the 22-year Hale cycle and the quasi 60 year cycle, then the residual trajectory shows no evidence of a unit root. The Met Office would have a rather stronger case if it had done this, and then analyzed the residual trajectory, since Doug's main argument about mis-specification of the linear trend model then largely disappears.

The Met Office would still have to defend against the assertion that the residual trajectory is actually just the upturn of a lower frequency (multicentennial) cycle, but that can be dealt with as a separate question. I suspect that the Met Office knows all of the above, but they could not formulate any argument involving abstraction of the multidecadal cycles without breaking ranks with all of the other GCM laboratories. The developers of GCMs have denied the existence of the 60-year cycle and have banked the associated temperature gain in the late 20th century as part of the consequences of AGW. Previous ups and downs have been "explained" in the models by tuning aerosol forcings.

Hence to do a statistical analysis of the temperature series that involved any specific and overt recognition of the multidecadal cycles would require that the Met Office admitted that its AOGCM was seriously flawed.
Hoist by its own petard, as I said.

May 28, 2013 at 8:50 AM | Unregistered Commenter Paul_K

@Paul_K

I disagree with Doug when he says the same complaint can be leveled against the linear trend model applied by the MET. The null hypothesis under this model is that the long-term temperature anomaly stays flat. It is only the alternative hypothesis which is unbounded.

Well, not quite. To establish that you also need to establish something about the auto-correlation coefficient in the AR(1) process.

And what we know on that score is that the process appears to be non-stationary. So no surprise that an AR(1) model finds a linear trend where an integrated model does not.

And that is the message here. Whenever a long-term temperature chart is placed in front of you and you note that it "goes up" over time, you are looking at what the statistical evidence suggests is a spurious trend.


This is a big deal, because it throws the ball back into the alarmist camp. No longer can they claim that a "0.8 degree rise in temperatures" is in itself proof of anything. They need to build a structural model that can find the evidence for the causal process (CO2 causing higher temperatures).

In other words, to satisfy our need for a structural model that can explain the variation while providing other required qualities, such as being bounded in temperature, we need to keep looking for the correct integrating processes. At the moment we haven't found them.

May 28, 2013 at 9:03 AM | Unregistered Commenter Geckko

Nick Stokes,
See here. The Met Office use a linear AR(1) model to state that the temperature rise from 1880 was statistically significant.
http://www.bishop-hill.net/blog/2012/11/10/parliamentarians-do-statistical-significance.html

Yes, I am saying that a null hypothesis of flat temperature with an AR(1) error variance is absurd.
The mis-specification in the linear trend plus AR(1) model manifests itself as substantial persistent cyclic deviations in the residuals. The fit fails tests of homoscedasticity, among other things.
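That residual structure is easy to demonstrate with a synthetic series. Here the "temperature record" is a driftless random walk, and a plain least-squares trend is fitted to it (ignoring the error structure, as the naive model effectively does); the leftover persistence shows up as very high lag-1 autocorrelation in the residuals:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 160
t = np.arange(n)

# A driftless random walk, as a stand-in for the temperature series.
y = np.cumsum(rng.normal(scale=0.1, size=n))

# Fit the linear-trend model and inspect the residuals.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Persistent, cyclic-looking deviations show up as high lag-1
# autocorrelation in the residuals: the mis-specification described above.
r = resid - resid.mean()
lag1 = np.dot(r[:-1], r[1:]) / np.dot(r, r)
```

A well-specified model would leave residuals close to white noise (lag-1 autocorrelation near zero); detrended integrated series do not.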

May 28, 2013 at 9:15 AM | Unregistered Commenter Paul_K

Maybe it's time for some double-blind testing of the Met's models to see if they (with their expert interpreters) truly can tell the gold from the dross.
How difficult would that be to set up, and would the Met Office partake in such an experiment?

May 28, 2013 at 9:21 AM | Unregistered Commenter tckev

To those who wonder what the big deal here is, picture in your mind the Keeling curve of rising CO2 concentrations in the atmosphere.

That is your linear trend. You could substitute an actual linear trend with little statistical difference.

If you do not correctly model the noise process (e.g. you use an AR(1) model), the CO2 curve, or the linear trend, will pick up the observed noise and badge it as trend.

I think this is the most conclusive, obvious evidence to suggest that the GCMs are doing exactly that. They have lots of great structural physics, but completely fail on the dynamic process that leads to noise being badged as trend.

The fact that those GCMs have clearly overestimated the potential warming over the last 10-15 years says it all.


But what is worse is how they respond to the clear shortcomings. The modellers, who tend to be alarmists, search for other "integrating variables". That is, if the models run too hot, instead of considering that they have misbadged noise as trend (which would require them to dial down the sensitivity to CO2) they search for a missing variable that will offset it. In the 70s they used aerosols. Since then they have employed clouds for that function, and now they appear to be looking to ocean heat uptake.

Doug Keenan's result is screaming at them to spend more time correcting the dynamics and noise processes - i.e. the natural variation - because it looks like they are doing everything they can to find a way to keep a spurious trend badged as a real, physically caused trend.

May 28, 2013 at 9:36 AM | Unregistered Commenter Geckko

Isn't this all a bit premature?

Until the Met Office specifically says in words the man on the Clapham Omnibus could understand that the temperature rise since 1850 is not statistically significant, it's all just interpretation, and the alarmists will simply dismiss it as "deniers' interpretation" while continuing with their narrative of the data showing that human activities are causing dangerous warming.

May 28, 2013 at 9:39 AM | Unregistered Commenter Turning Tide

Paul_K
Yes, they do go back to 1880. And it's true that better models could be sought, over the longer range, and would improve the significance. The Foster/Rahmstorf paper is a step in this direction.

Still, the significance levels here are very high. It is hard to imagine that heteroskedasticity would undo them.

But I note that they also say:
"Statistical analyses and modelling of the global temperature record have shown that, because of natural variability in the climate system, a steady warming should not be expected to follow the relatively smooth rise in greenhouse gas concentrations. Over periods of a decade or more, large variations from the average trend are evident in the temperature record and so there is no hard and fast rule as to what minimum period would be appropriate for determining a long-term trend."

That's focussed on the minimum period, but emphasises that calculating a trend and its significance is not an assertion of linearity.

May 28, 2013 at 9:45 AM | Unregistered Commenternick stokes

BS: "The temperature rise is currently indistinguishable from natural variation."

This is important because if the whole of the signal seen could be natural variation then the "first order" simplest explanation of the climate is that "it is natural variation" and anything else is a refinement to this.

To understand why this is important, we need to understand the basic climate researcher argument:

1. Everything has to be explained
(this is not stated explicitly, but it is their basic philosophy that everything should be explained)
2. that "it cannot be explained as natural variation".
3. Therefore, even though we know that CO2's effect is far too small, something must have caused the warming and CO2 is the only suspect they offer, so they introduce ideas to scale up its effect. And because there is only one main suspect, the mere fact that the rise occurred is taken to prove that suspect guilty, even though they don't know how it caused the rise.

However, this admission by the Met Office undermines this argument. So the sceptic argument goes:

1. There is natural variation. (Not stated explicitly but it is a basic philosophy of sceptics)
2. That the temperature signal could be produced in its entirety by natural variation.
3. So, natural variation is all that is needed to explain the climate signal.

But in addition sceptics also say

a. That because of the nature of CO2, although this is not necessary to explain the rise, they would in addition to natural variation expect a small rise in temperature due to CO2.
b. There may be feedback mechanisms.

Paradoxically, the real bone of contention is not whether natural variation is large enough to explain climate change. From my research, the actual argument is between these two statements:

1. Everything has to be explained scientifically
2. There is natural variation

Because natural variation is really a statement that we can know the scale of change without knowing the detailed explanation for that change. It is a statement that we can model the climate and to some extent predict its behaviour without explaining why that change is occurring. So it is really challenging the philosophy of scientists that everything can and should be explained scientifically.

These different views really stem from the different environments of academia and commercial engineering. In commercial engineering, decisions have to be made on the available evidence. Since engineers are never going to know everything, they have to model as much as they can of what they don't know, even when they don't know the detail. So even when they do not know why things are changing, knowing the scale of the change is a good tool in decision making. Those coming from this environment therefore find it very easy to accept the concept of "natural variation".

In contrast, academia is on an inter-generational crusade to understand everything. As such, it tends to see "natural variation" as a lazy man's excuse. It is seen as a non-explanation, as a failure to get to grips with the inner workings of the universe. And they have a point, because it is only by refusing to be satisfied with glib statements that "life's like that" that our society has pushed forward the frontier of understanding.

So who is right?

If we are trying to push back the understanding of the climate - then yes, "natural variation" is not a helpful concept, and I can understand why academia has tried to suppress it as an explanation of recent climate change. However, this hasn't removed the big hole: just as much is unknown (as the failure of the climate models shows). All it has done is change the locus of "not knowing" from natural variation of the climate as a whole to not knowing how the (presumed) feedbacks work. Just as much is unknown ... it is just modelled in a different way.

Climate researchers
Temp = CO2-effect x UNKNOWN

Sceptics
Temperature = UNKNOWN + CO2-effect

The whole of this debate comes down to whether "Unknowns" are seen as adding to the CO2 effect or multiplying its effect. And also note, that CO2 in the second is the "also ran".

Scientists should leave decisions to professional decision makers

If, however, we are trying to make a decision about whether we should act on CO2, then scientists should stop interfering in areas where they really have no professional competence and leave it to professional decision makers, who have great experience of using science to make decisions. The vast majority of sceptics fall into this category, and for those people "natural variation" is a perfectly sensible, valid and useful basis on which to make our decisions.

So, I think the real argument is between these two statements:

1. Everything has to be explained scientifically, based on known changes like CO2. (This idea rejects "natural variation", because what is unknown is treated as precise relationships still to be discovered, not irreducible randomness.)

2. We can make decisions on the climate without detailed scientific knowledge, using pseudo-scientific models like natural variation, which are a proven and useful decision-making tool for systems where we do not know everything.

May 28, 2013 at 9:59 AM | Unregistered CommenterMikeHaseler
