Thursday
Jul 3, 2014

Where there is harmony, let us create discord

My recent posts touching on statistical significance in the surface temperature records have prompted some interesting responses from upholders of the climate consensus, with the general theme being that Doug Keenan and I don't know what we are talking about.

This is odd, because as far as I can tell, everyone is in complete agreement.

To recap, Doug has put forward the position that claims of surface temperatures doing something out of the ordinary are not supportable, because the temperature records are too short to define what "the ordinary" is. In more technical language, he suggests that a statistically significant rise in temperatures cannot be demonstrated because we can't define a suitable statistical model at the present time. He points out that the statistical model that is sometimes used to make such claims (let's call it the standard model) is not supportable, showing that an alternative model can provide a much, much better approximation of the real-world data. This is not to say that he thinks his alternative model is the right one - merely that, because it is so much better than the standard one, it is safe to conclude that the latter is failing to capture a great deal of the variation in the data. He thinks that defining a suitable model is tough, if not impossible, and that the only alternative is therefore to use a physical model.
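To make that comparison concrete, here is a minimal sketch of the kind of model contest involved, using Python's statsmodels. This is not Doug's exact calculation; the data file name and layout are placeholders for an annual global temperature series.

```python
# A sketch of the model comparison at issue (not Doug Keenan's own code).
# Assumes annual global temperature anomalies in a two-column text file
# "hadcrut4_annual.txt" (year, anomaly); the file name is a placeholder.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

data = np.loadtxt("hadcrut4_annual.txt")
temps = data[:, 1]

# The "standard model": a linear trend with AR(1) noise.
standard = ARIMA(temps, order=(1, 0, 0), trend="ct").fit()

# The comparator: a driftless ARIMA(3,1,0), with no trend term at all.
alternative = ARIMA(temps, order=(3, 1, 0), trend="n").fit()

print("trend + AR(1): loglik =", standard.llf, " AIC =", standard.aic)
print("ARIMA(3,1,0):  loglik =", alternative.llf, " AIC =", alternative.aic)
```

If the driftless model comes out with a much higher likelihood, then the trend-plus-AR(1) model is, as Doug argues, failing to capture a great deal of the structure in the data.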

As I have also pointed out, the Met Office does not dispute any of this.

So, what has the reaction been? Well, avid twitterer "There's Physics", who I believe is called Anders and is associated with Skeptical Science, tweeted this:

Can clarify their position wrt statistical models - in a way that might understand?

A response from John Kennedy appeared shortly afterwards, pointing to this statement, which addresses Doug Keenan's claims, noting that there are other models that give better results, and suggesting that the analysis is therefore inconclusive. Kennedy drew particular attention to the following paragraph:

These results have no bearing on our understanding of the climate system or of its response to human influences such as greenhouse gas emissions and so the Met Office does not base its assessment of climate change over the instrumental record on the use of these statistical models.

I think I'm right in saying that Doug Keenan would agree with all of this.

Anders has followed this up with a blog post, in which he says I don't understand the Met Office's position. It's a somewhat snide piece, but I think it does illuminate some of the issues. Take this for example:

Essentially – as I understand it – the Met Office’s statistical model is indeed, in some sense, inadequate.

Right. So we agree on that.

This, however, does not mean that there is a statistical model that is adequate.

We seem to agree on that too.

It means that there are no statistical models that are adequate.

Possibly. Certainly I think it's true to say that we haven't got one at the moment, which amounts to the same thing.

Then there's this:

[Statistical models] cannot – by themselves – tell you why a dataset has [certain] properties. For that you need to use the appropriate physics or chemistry. So, for the surface temperature dataset, we can ask the question are the temperatures higher today than they were in 1880? The answer, using a statistical model, is yes. However, if we want an answer to the question why are the temperatures higher today than they were in 1880, then there is no statistical model that – alone – can answer this question. You need to consider the physical processes that could drive this warming. The answer is that a dominant factor is anthropogenic forcings that are due to increased atmospheric greenhouse gas concentrations; a direct consequence of our own emissions.

Again, there is much to agree with here. If you want to understand why temperature has changed, you will indeed need a physical model, although whether current GCMs are up to the job is a moot point to say the least. (I'm not sure about Anders' idea of needing a statistical model to tell whether temperatures are higher today than in 1880 - as Matt Briggs is fond of pointing out, the way forward here is to subtract the measurement for 1880 from that for today - but that's beside the point).

All this harmony aside, I hope you will be able to see what is at the root of Anders's seeming need to disagree: he is asking a different question from the one posed at the top of this post. He wants to know why temperatures are changing, while I want to know whether they are doing something out of the ordinary. I would posit that defining "the ordinary" for temperature records is not something that can be done using a GCM.

I think Anders' mistake is to assume that Doug is going down a "global warming isn't happening" path. In fact the thrust of his work has been to determine what the empirical evidence for global warming is - when people like Mark Walport say that it is clear that climate change is happening and that its impacts are evident, what scientific evidence is backing those statements up? I would suggest that anyone hearing Walport's words would assume that we had detected something out of "the ordinary" going on. But as we have seen, this is a question that we cannot answer at the present time. And if such statements are supported only by comparisons of observations to GCMs then I think words like "clear" and "evident" should not be used.



Reader Comments (307)

Nullius in Verba said the following.

You can, of course, chuck the data into your favourite stats software and ask it to calculate the OLS fit and standard error. A lot of people who have only been on an introductory stats course do - I blame the lecturers.

I definitely agree that it is the lecturers who are primarily to blame. Most of the current mess in global warming would not have happened if academic statisticians had done their job properly when teaching undergraduates. To some extent, I feel sorry for the climatologists, most of whom are well-intentioned, but have little clue about statistics. Here is an extract from my critique of AR5 statistical analyses.

Imagine that you had earned a Ph.D. in climatology: that takes about five years of hard work, on very tiny pay. Then you worked hard for decades more, earned respect from your peers, and essentially founded your professional identity on being an expert in the study of the climate system. And now, someone comes along and tells you that most of the work you and your colleagues have done during your careers is invalid, due to a statistical problem. How would you respond? Would you say “Oh, that’s nice—thanks for letting me know”?

What makes this especially annoying is that the statistical principles that they need to know are easy.
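To illustrate the pitfall NiV describes, here is a minimal sketch on simulated data (all parameters illustrative): an OLS trend fitted to pure AR(1) noise, with the naive standard error compared against an autocorrelation-consistent (HAC) one.

```python
# Sketch: naive OLS standard errors understate trend uncertainty when the
# residuals are autocorrelated. Simulated data; parameters are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 134                                   # about the length of 1880-2013
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):                     # AR(1) noise with phi = 0.9
    noise[i] = 0.9 * noise[i - 1] + rng.normal(scale=0.1)

y = noise                                 # no trend at all, by construction
X = sm.add_constant(t)

naive = sm.OLS(y, X).fit()
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 10})

print("slope estimate:", naive.params[1])
print("naive SE:", naive.bse[1], " t =", naive.tvalues[1])
print("HAC SE:  ", hac.bse[1], " t =", hac.tvalues[1])
# The HAC standard error is typically several times the naive one; taking
# the naive value at face value often declares a "significant" trend in
# what is, by construction, pure noise.
```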


About ATTP, I told Andrew Montford several days ago that I believe ATTP is a troll.

Jul 8, 2014 at 7:39 AM | Unregistered CommenterDouglas J. Keenan

Doug,


About ATTP, I told Andrew Montford several days ago that I believe ATTP is a troll.

Now that is a surprise.


And now, someone comes along and tells you that most of the work you and your colleagues have done during your careers is invalid, due to a statistical problem.

Shall we expand this a little? The "someone" appears to be an ex financial sector mathematician who seems to think that because of the high salaries, this sector can attract very bright people, inferring - I assume - that this applies to themselves (a possible correlation/causation problem here). They appear to have taught themselves statistics and have now concluded (without any reference to physics whatsoever) that all - or most - of the work in climate science is wrong. This appears to be based on an analysis that cannot actually tell us anything about the underlying processes associated with climate science. Now what's more likely: thousands of people, some of whom have spent their careers working in this field, are wrong, or one plucky self-taught statistician who appears to have no understanding of basic physics is right? I'll leave that as an exercise for the readers.


What makes this especially annoying is that the statistical principles that they need to know are easy.

Yes, they are. That's another reason why climate scientists are probably not wrong.

Jul 8, 2014 at 8:04 AM | Unregistered CommenterAnd Then There's Physics

"I tried to post this:

http://andthentheresphysics.wordpress.com/2014/07/05/adventures-on-the-hill/#comment-25970"

Good luck with that; ATTP, in common with her heroes, savagely censors dissent through a naughty invisible friend called "Rebecca". It must be a novel experience for her being on a site where dissent isn't censored and argument is encouraged.

"You're using a purely statistical model to CLAIM that the rise is spurious. You really can't make that claim without some physical model of how this rise could occur. I thought we'd agreed on that."

Is that what NIV is doing? I understood him to be demonstrating that a statistical model could be constructed that showed the rise in temperature was spurious. I don't believe he claimed that to be true; in fact he said: "It tells us (if you believe it) that the observed rise is spurious."

So I guess what it boils down to is whether anything meaningful can be said about the climate from the current physical models and you're saying it can. Is that right? If it is then it would be nice to know why you believe that.

Jul 8, 2014 at 8:13 AM | Registered Commentergeronimo

NIV,

You are proof that Al Beeb's decision to "censor" anyone off the airwaves if they don't believe in the religion of Mann Made Global Warming (tm) is a catastrophe.

I would suggest that the last 6 pages have contained more illuminating scientific discussion - even for us "ignorants" to understand - than has ever been published in combined total at Real Climate and Skeptical Science (actually we probably passed that about 1/10th of the way down the first page of this thread).

Doug,

I wouldn't be as charitable as you have been about Anders. He does come across as your bog-standard catastrophiliac, incapable of understanding anything that goes against his religion, which probably explains why he keeps answering questions that were never asked.

Regards

Mailman

Jul 8, 2014 at 8:39 AM | Unregistered CommenterMailman

Geronimo,
I can't find that comment anywhere.


So I guess what it boils down to is whether anything meaningful can be said about the climate from the current physical models and you're saying it can. Is that right? If it is then it would be nice to know why you believe that.

It's not so much belief as basic physics, but this isn't really the point. It doesn't actually matter for this discussion whether or not we can say anything meaningful about the climate from current physical models; what matters is whether or not we can say anything meaningful without physical models. Since we can't, arguing that the rise is spurious (or that we don't know if the rise is a consequence of anthropogenic forcings or not) using statistical models only, is logically inconsistent. As I think I may have said before, I fail to see how this isn't relatively obvious.

Mailman,


explains why he keeps answering questions that were never asked.

I'd stop if people didn't keep saying things that appear to contradict what they'd just said.

Doug,
Maybe I'll highlight and slightly amend this quote from your AR5 critique :


And now, someone comes along and tells you that most of the work you ... have done ... is invalid, due to a statistical problem. How would you respond? Would you say “Oh, that’s nice—thanks for letting me know”?

Jul 8, 2014 at 9:15 AM | Unregistered CommenterAnd Then There's Physics

Mailman: NiV's contributions on radiative-convective greenhouse theory from 23rd May were even better (the next was here and it's well worth scrolling down for the others). We're lucky to have him.

Jul 8, 2014 at 10:13 AM | Registered CommenterRichard Drake

> This is all an argument to demonstrate that using trend+AR(1) to claim "significance" is unjustified.

So Nullius pushes the pea back under the "pure stats" thimble again.

Douglas' argument is not only that the significance claim is unjustified, but that the very choice of trend+AR(1) is unjustified. This claim is followed by some editorial content about accountability, and even science at the end of Douglas' WSJ op-ed. This accountability does not seem to apply to Douglas himself, since Douglas can't even answer simple questions about his release of Muller's email and his follow-up on McNeall's email.

To focus on significance (which MattStat showed was insignificant) minimizes what's at stake. To see it, let's compare it to another wording of the "pure stats" moment of that pea and thimble game:

It is simply a counterexample to the IPCC/MO claim that trend+AR(1) is the better fit to the data.

This represents Douglas' master argument a bit better, as it indicates that the main claim is about the choice of a model, not just a claim of significance. So let me get Douglas' argument straight. Unless one can prove that it's impossible to find a counterexample of a better fit to the data, any choice of model is unjustified. Is that what Douglas and Nullius are arguing here?

A "yes" would suffice. A "no" would not. A "no" would need to be padded with "here's my or Douglas' argument", followed by an argument, not textbook platitudes.

***

If we can agree that the central question is to decide which statistical models to choose, that AR(1) has limitations, and that our models should exhibit physical realism, to argue for random walks is simply inconsistent. No theory is supposed to stand tall against absurd or degenerate testing.

Jul 8, 2014 at 2:30 PM | Unregistered Commenterwillard

"That is what I asked. Didn't you read what I wrote?"

No it isn't and yes of course I did. You only stipulated the model in the case of the errors, not the trend.

"I saw this analogy of yours, but noone's trying to claim that 6 is the biggest number (that would be particularly stupid)."

The Met Office claim was how the whole argument started. And yes, of course it was stupid. That's the point.

"No, I don't believe that and I doubt anyone else with any understanding of climate science would believe it either."

Neither do I, nor does the Bishop, and neither I strongly suspect does Doug.

"This is the fundamental point. You're using a purely statistical model to CLAIM that the rise is spurious."

How many times do I have to write it?!!

NO. I'M. NOT.

I'm not making any such claim. I've stated baldly, in as clear a way as I can possibly think of, that we are NOT making any claims regarding the spuriousness or otherwise of the rise. We are doing the exact opposite: we are stating that it is NOT POSSIBLE to do so with the means available.

Sheesh! What does it take...?!

--

"Now what's more likely : thousands of people, some of whom have spent their careers working in this field, are wrong, or one plucky self-taught statistician who appears to have no understanding of basic physics is right? I'll leave that as an exercise for the readers."

Hands up, class, who can spot the 'ad populum' fallacy? :-)

--

" Since we can't, arguing that the rise is spurious (or that we don't know if the rise is a consequence of anthropogenic forcings or not) using s[t]atistical models only, is logically inconsistent. As I think I may have said before, I fail to see how this isn't relatively obvious."

Of course it's obvious! It's what we've been saying to you for the past couple of days!
What I can't understand is why you still think we're saying anything different!

--

"Unless one can prove that it's impossible to find a counterexample of a better fit to the data, any choice of model is unjustified. Is that what Douglas and Nullius are arguing here?"

No. It is *always* possible to find a counter-example model that is a better fit to the data. But that does *not* mean that any choice of model is necessarily unjustified. To be justified, a model first has to be validated. And to validate a model in the absence of the ability to perform controlled experiments, it has to be a validated physical model.

And climate science doesn't have one.

--

"If we can agree that the central question is to decide which statistical models to choose, that AR(1) has limitations, and that our models should exhibit physical realism, to argue for random walks is simply inconsistent."

You are aware, aren't you, that the classical 'random walk' is on the boundary of the AR(1) class of models, and can be approximated arbitrarily closely with one? :-)
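For anyone who doubts the limiting-case point, here is a small simulation sketch (illustrative parameters only) showing an AR(1) process with phi close to 1 shadowing a random walk over a century of annual steps.

```python
# Sketch: an AR(1) process with phi near 1 is practically indistinguishable
# from a random walk over ~130 annual steps. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, phi = 134, 0.999
shocks = rng.normal(size=n)

ar1 = np.zeros(n)
walk = np.zeros(n)
for i in range(1, n):
    ar1[i] = phi * ar1[i - 1] + shocks[i]   # stationary, but only just
    walk[i] = walk[i - 1] + shocks[i]       # the phi -> 1 limiting case

# Over a sample this short the two paths track each other almost exactly.
print("max divergence over the sample:", np.max(np.abs(ar1 - walk)))
```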

But as I pointed out earlier, the linear trend is physically impossible too. And yet you people persist in trying to fit it to the data. Are you degenerate testers too?

Jul 8, 2014 at 10:29 PM | Unregistered CommenterNullius in Verba

Nullius,


No it isn't and yes of course I did. You only stipulated the model in the case of the errors, not the trend.

Yes, I could have written it more carefully. I was hoping that we might avoid pedantry. I might be hoping in vain though. What I was really trying to point out (as I've already mentioned) is that there is a difference between analysing data, and determining what that data says about the real world.


How many times do I have to write it?!!

NO. I'M. NOT.

I'm not making any such claim, I've stated baldly, in as clear a way as I can possibly think of, we are NOT making any claims regarding the spuriousness or otherwise of the rise. We are doing the exact opposite: - we are stating that it is NOT POSSIBLE to do so with the means available.

Sheesh! What does it take...?!


Well Doug very certainly is. You also said this:

It tells us (if you believe it) that the observed rise is spurious; that there is no underlying deterministic trend, only the chance accumulation of random noise. It tells us that even after a century it's still weather, not climate.

If you're not using a statistical model to claim that we can't tell if the warming is natural or not, could you please stop saying things that make it seem that you are.

Tell you what, if you really agree that one can't use a statistical model to make any claims about whether the warming is natural or not, you could prove that by simply saying "Doug Keenan is wrong".

Jul 9, 2014 at 7:38 AM | Unregistered CommenterAnd Then There's Physics

I think it is time for the old Rutherford quote:

If your experiment needs statistics, you ought to have done a better experiment.

ATTP - please can you identify the CO2 signal in this graph:

Alley/Lappi GISP2 graph with HadCRUT4gl appended

Note that on geological timescales, the MWP, the Roman WP, the Minoan WP and the Holocene Optimum are all just inter-glacial weather, as illustrated by the Vostok data. The late 20th Century 'warm' period was just a continuation of the long slow thaw from the LIA, compounded by a run of mild winters in the NH (and a very gullible, scientifically illiterate and historically ignorant mainstream media), some dodgy station selection, under-estimated UHI, and much spurious homogenisation of and adjustment to the surface station data. Just face it - we have increased atmospheric CO2 from 290 ppm to 400 ppm and there has been no measurable effect on global average temperatures, ergo CO2's radiative properties have a very minor role in the climate system (and Lindzen was right all along).

Time for another quote to put all this climate policy insanity and scientific and statistical obfuscation in context, this time from Lomborg:

"We live in a world where one in six deaths are caused by easily curable infectious diseases; one in eight deaths stem from air pollution, mostly from cooking indoors with dung and twigs; and billions of people live in abject poverty, with no electricity and little food. We ought never to have entertained the notion that the world’s greatest challenge could be to reduce temperature rises in our generation by a fraction of a degree."

Jul 9, 2014 at 8:46 AM | Unregistered Commenterlapogus

Over at wotty's own blog I've drawn attention to the statistical blunder in the Nature paper by Mora et al (recently pointed out by Hawkins et al) and the recommendations from the Oxburgh report that climate scientists should work more closely with statisticians.

Jul 9, 2014 at 9:51 AM | Registered CommenterPaul Matthews

Paul,
You did indeed. And, as you probably could tell by my response, I agree that Mora et al.'s uncertainty estimate is wrong. I have no idea why they thought the error on the mean was the right way to determine the uncertainty interval. I suspect most others who comment on my blog would agree. I also agree that closer work with statisticians may well be a good thing, but am unconvinced that it will be some kind of panacea. Also, I think - as Ed Hawkins has illustrated - that an assumption that climate scientists are poor statisticians would not be correct.

Here I am, however, trying to point out the flaws in Doug Keenan's statistical analysis, and I appear to be failing to get agreement here, despite people agreeing that you need physical models (which Doug Keenan ignores) to understand the evolution of the planet's surface temperature.

Jul 9, 2014 at 10:26 AM | Unregistered CommenterAnd Then There's Physics

Jul 8, 2014 at 8:39 AM | Unregistered CommenterMailman
Agreed. I have followed this thread with great interest, keeping my head well down since it is — to use the usual phrase — well above my pay grade.
One comment I do feel qualified to make as someone whose entire working life has been built one way or another on the use (and occasionally abuse!) of the English language: if ever there is a Nobel Prize in Waffle, Obfuscation and Wriggling on the Hook, ATTP will be a prime candidate.
Though, as ever, I do appreciate anyone from the 'Dark Side' prepared to put their head into the sceptic lions' den!

Jul 9, 2014 at 11:24 AM | Registered CommenterMike Jackson

Mike,


if ever there is a Nobel Prize in Waffle, Obfuscation and Wriggling on the Hook, ATTP will be a prime candidate.

Okay, nice and simple then. Doug Keenan is wrong. Everyone agree?

Jul 9, 2014 at 11:27 AM | Unregistered CommenterAnd Then There's Physics

"Tell you what, if you really agree that one can't use a statistical model to make any claims about whether the warming is natural or not, you could prove that by simply saying "Doug Keenan is wrong"."

Hmmmm. Wrong about what? Let's have a look at what Doug says...

"(To emphasize—I have never advocated adopting any particular statistical model for drawing inferences from climatic data.)"

i.e. Doug denies having said that one can use a statistical model to make any claims about whether the warming is natural or not.


"The Met Office seems to be in broad agreement with His Eminence and myself in believing that we currently do not know how to choose a statistical model—ergo, we cannot do statistical analysis."

i.e. one can't use a statistical model to make any claims about whether the warming is natural or not.

"1. There is no observational evidence for significant global warming, due to any cause—natural or anthropogenic. (Claims to the contrary are based on insupportable statistical analyses.)"

i.e. There are no valid claims about whether the warming is natural or not because one can't use a statistical model to make any claims about whether the warming is natural or not. Such claims as have been made are insupportable.

"Again, all of this is correct. In stating these things, the section is presenting the basics of the statistical situation reasonably fairly.

Additionally, §10.2.2 states this: “Trends that appear significant when tested against [the statistical model used in Chapter 2] may not be significant when tested against [some other statistical models]”. Thus, §10.2.2 effectively acknowledges that the statistical model used in Chapter 2 should not have been relied on.

So, what statistical model does §10.2.2 choose? None. That is, §10.2.2 effectively acknowledges that we do not understand the data well enough to choose a statistical model. It does that even though it also acknowledges that choosing such a model is required for drawing inferences.
The conclusion is thus clear: it is currently not possible to draw inferences from the series of global temperatures. This conclusion is extremely important. It should have been stated explicitly, and it should have been noted in the Executive Summary of Chapter 10.

Although this critique is focused on surface temperature observations, the same statistical criticism applies to other claims of observational evidence for significant global warming. Simply put, no one has yet presented valid statistical analysis of any observational data to show global warming is real. Moreover, that applies to any warming — whether attributable to humans or to external natural factors, such as the sun. This is implied by §10.2.2, and indeed it is clear from the statistics."

i.e. one can't use a statistical model to make any claims about whether the warming is natural or not, the IPCC quietly agree, and the same principle applies more generally. There is no observational evidence of an abnormal change because all such claims to date have relied on unvalidated statistical models.

"There seems to be only one scientist who has seriously attempted to answer the crucial question, i.e. to choose a statistical model. That scientist is Demetris Koutsoyiannis, at the National Technical University of Athens. Koutsoyiannis has not (yet) found a viable answer to the question; at least, though, he has tried to. No other researcher has tried, to my knowledge."

i.e. Even Koutsoyiannis' fGn model has not been shown to be sufficient.

"A leading statistician in the U.S. said the following, in an e-mail to me. "My sense is that the observed time series is not sufficiently long to cleanly distinguish among various time series models, nor to definitively demonstrate man - made warming versus natural cycles versus (for some models) a mostly flat trend." Indeed, that should be obvious to anyone who has reasonable skill at the analysis of time series. It is only true, however, if we are considering purely-statistical analyses. Generally, though, analyses of data should incorporate some knowledge of the application area: in this case, the physics of the climate system. That is, we should try to use physics to constrain the set of candidate models. That strategy has also been suggested by a statistician at the Met Office, Doug McNeall. Although that strategy is clear and arguably necessary, implementing it seems to be extremely difficult. The only researcher who has attempted implementing it, as far as I know, is Koutsoyiannis."

i.e. one can't use a statistical model to make any claims about whether the warming is natural or not, a physical model is needed, but it's a hard problem.

Given that Keenan evidently agrees with your statement, and has said so repeatedly, what is it he is supposed to be 'wrong' about?

No, on second thoughts, never mind. I think it's about time to give up on this one!
It's been a pleasure talking with you. :-)

Jul 9, 2014 at 8:38 PM | Unregistered CommenterNullius in Verba

> It is *always* possible to find a counter-example model that is a better fit to the data. But that does *not* mean that any choice of model is necessarily unjustified. To be justified, a model first has to be validated. And to validate a model in the absence of the ability to perform controlled experiments, it has to be a validated physical model.

And so the pea comes back under the "random physics" thimble.

So unless we have a validated physical model, we can't justify our choice of model. It just happens that nobody has a validated physical model.

It's not that any choice of model is not necessarily unjustified, it's just that nobody ever found the only thing that would justify the choice of a model, according to Douglas.

And unless Douglas finds such justification, Douglas can ask if climate studies form a science:

Making the right choice, the one that best corresponds to physical reality, requires further, difficult research, and accepting conclusions based on shaky premises risks foreclosing upon such work. That would be gross negligence for a field claiming to be scientific to commit.

http://www.informath.org/media/a41.htm

And thus a third thimble is being introduced: let's call that one "anti-science".

***

But wait. If "it is *always* possible to find a counter-example model that is a better fit to the data," as Nullius says, how does testing against random walks help to find that best model? More importantly, let's recall Douglas' claim:

The improved fit does tell us that until more research is done on the best assumptions to apply to global average temperature series, the IPCC's conclusions about the significance of the temperature changes are unfounded.

If "it is *always* possible to find a counter-example model that is a better fit to the data," as Nullius says, how does Douglas' conclusion follow exactly?

Jul 9, 2014 at 11:37 PM | Unregistered Commenterwillard

WFC

The Earth is not a closed system. Energy enters as solar insolation. Some is reflected back to space due to albedo. The rest is absorbed, has various effects and may be stored.
It then leaves to space as outward long wave radiation.

James Jessop

I note that advocates of an ocean heat content energy based metric to replace surface temperatures include the sceptic Roger Pielke sr.

Jul 10, 2014 at 12:14 AM | Unregistered CommenterEntropic man

EM - is there any reason to regard the thermal capacity of the ocean as being less than infinite? (For practical purposes, of course.)

Jul 10, 2014 at 8:16 AM | Registered CommenterMartin A

@ willard, Jul 9 at 11:37 PM

The quotes that you cite are from an op-ed piece that I published in the Wall Street Journal. Op-ed pieces are edited by the newspaper’s editors, and authors do not have as much control over the exact wording as authors do with, say, peer-reviewed literature. In this case, the word that you emphasize, “best”, was not written by me. (I think that I might have approved the change, but only because we had extended discussions about many of the edits and I got weary; I was apparently much more pernickety than most authors.)

The week that the op-ed piece was published, I also put a Director’s Cut on my web site, at
http://www.informath.org/media/a42.htm
That does not contain the word that you emphasize.

Newspapers do things differently than peer-reviewed journals: in some ways better, in some ways worse. As an example of a better way, the figure showing global temperatures was not drawn by me. Rather, I supplied the data and they drew the figure. That is standard procedure for WSJ. The procedure implies that figures are almost certain to be fair representations of the data and that the data is available to anyone who later wants it. No peer-reviewed journal has such high quality control, AFAIK.

Jul 10, 2014 at 8:58 AM | Unregistered CommenterDouglas J. Keenan

NiV (8:38 PM): A gracious ending that I could not easily have emulated. :)

Doug (8:58 AM):

The procedure implies that figures are almost certain to be fair representations of the data and that the data is available to anyone who later wants it. No peer-reviewed journal has such high quality control, AFAIK.

A remarkable point in favour of newspapers (the WSJ, at least) over peer-reviewed journals that I was unaware of, thanks. That should cause tremors at climate openness central. (If there was a climate openness central. The IPCC doesn't exactly fit the bill, sad to say, but it really should.)

Jul 10, 2014 at 9:58 AM | Registered CommenterRichard Drake


is there any reason to regard the thermal capacity of the ocean as being less than infinite? (For practical purposes, of course.)

Yes. In fact, if you assume that the ocean absorbs a fraction of the energy excess that is the same as its fraction of the heat capacity of the entire system (i.e., about 500 times greater than the heat capacity of the ice/atmosphere/land), the equilibrium temperature doesn't change, the TCR is reduced by a factor of a few, and the rate at which we warm is slower, but not by as much as you might think (again, a factor of a few, rather than a factor of 10).

Of course, given that the oceans do not absorb 500 times as much of the energy as the rest of the system (more like 50), this isn't a particularly realistic assumption.
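For anyone who wants to play with the numbers, here is a toy two-box energy-balance sketch - not ATTP's calculation, and every parameter value below is an illustrative assumption - of why a very large deep-ocean heat capacity slows surface warming by a factor of a few rather than preventing it.

```python
# Toy two-box energy-balance model (illustrative parameters, not ATTP's
# calculation): a small surface box exchanging heat with a huge deep ocean.
F = 3.7        # forcing, W/m^2 (roughly a doubling of CO2)
lam = 1.2      # radiative feedback, W/m^2/K
gamma = 0.7    # surface-to-deep-ocean exchange coefficient, W/m^2/K
C_s = 10.0     # surface box heat capacity, W yr m^-2 K^-1
C_d = 1000.0   # deep-ocean box: 100 times the surface box

Ts, Td = 0.0, 0.0
dt = 0.1                           # years
for _ in range(int(70 / dt)):      # integrate about 70 years
    dTs = (F - lam * Ts - gamma * (Ts - Td)) / C_s
    dTd = gamma * (Ts - Td) / C_d
    Ts += dTs * dt
    Td += dTd * dt

print("surface warming after ~70 yr:", round(Ts, 2), "K")
print("equilibrium without deep ocean:", round(F / lam, 2), "K")
# The deep ocean holds the surface below its equilibrium for a long time,
# but the transient response is smaller by a factor of a few - not by
# anything like the ratio of the two heat capacities.
```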

Jul 10, 2014 at 10:50 AM | Unregistered CommenterAnd Then There's Physics

Doug,


Rather, I supplied the data and they drew the figure. That is standard procedure for WSJ. The procedure implies that figures are almost certain to be fair representations of the data and that the data is available to anyone who later wants it. No peer-reviewed journal has such high quality control, AFAIK.

Seriously, you think this is better? Give raw data to someone - who is not involved in the research - who then produces figures for you? You think this would be some kind of improvement and is more likely to produce figures that are a more fair representation of the data? I know you think I'm a troll, but it's really hard not to behave like one when you say things that are this ridiculous. Seriously, think about what you've just said.

I'll even give you some clues. The hard part about producing figures is not actually plotting the data points on the graph, it's doing all the analysis that allows you to determine what the datapoint values should be in the first place. Once that's been done, it's trivial to actually put them on a graph and the only benefit of sending it to someone who works for a newspaper or a journal is that it might look prettier. Alternatively, if you're suggesting that someone else should do the actual analysis but then shouldn't be on the paper, that would both seem rather unethical and - one might argue - misrepresent who did the work. The author list is meant to represent those involved. If you leave certain people off, then it's not a fair representation of those involved.

Jul 10, 2014 at 12:10 PM | Unregistered CommenterAnd Then There's Physics

ATTP 10:50 am

Thank you for answering on EM's behalf. I'm sorry to say I can't make much sense of your reply, whereas I can usually understand what EM says, even though I usually disagree with it.

"Of course, given that the oceans do not absorb 500 times as much of the energy as the rest of the system (more like 50), this isn't a particularly realistic assumption."

I'm surprised to hear that.

I thought that it was all completely up in the air as to what is happening to "the missing heat". I had not heard that it was a 50:1 sharing.

But I don't see why you should not retain your 50:1 sharing (if that's what you believe it is) while also treating the thermal capacity of the ocean as infinite, i.e. so its temperature will not change significantly whatever happens.

(Unless you specially wish to, don't spend the time elucidating further - you should probably regard me as a hopeless case.)

Jul 10, 2014 at 1:11 PM | Registered CommenterMartin A

Martin,


I thought that it was all completely up in the air as to what is happening to "the missing heat". I had not heard that it was a 50:1 sharing.

Well, if you consider the ocean heat content data, the amount of energy associated with warming the land and atmosphere, and the amount associated with melting ice, then most goes into the oceans. In fact, 50 times is too high as it's more like 93% goes into the oceans, with the rest going into the other components of the climate system. The point is, though, that it's nowhere near as high as you would expect if you simply compared the heat capacities of the different systems. The reason for this is that upper parts of the ocean reach equilibrium with the rest of the system very quickly. Therefore you can treat the upper ocean and the land/atmosphere as a single system if you wished. It, however, takes much longer to reach an equilibrium with the deeper ocean. So, for short timescales (years/decades) the deeper ocean can be largely ignored. If, however, you wanted to know more about the long-term equilibrium of the system, then you'd need to also include the flow of energy into the deep ocean.


But I don't see why you should not retain your 50:1 sharing (if that's what you believe it is) while also treating the thermal capacity of the ocean as infinite, i.e. so its temperature will not change significantly whatever happens.

Well, because that doesn't really make sense. If the oceans absorb 50 times as much as the rest, then that would suggest that about 98% goes into the oceans and 2% goes into the rest of the system (it's more like 93% and 7%, but that's not really the point). If you consider what this 2% will do, it will heat the surface and temperatures will rise. They can't really do anything else. You can't add energy without increasing temperatures.


(Unless you specially wish to, don't spend the time elucidating further - you should probably regard me as a hopeless case.)

I have no issue with elucidating further, even if it isn't accepted.

Jul 10, 2014 at 1:25 PM | Unregistered CommenterAnd Then There's Physics

Martin A

A quick back-of-the-envelope calculation.

The volume of the oceans is 1.35*10^9 cubic kilometres.
Specific heat capacity is 2.1*10^3 J/kg/C.
Expansion coefficient is 1.39*10^-3/C.
Imbalance 10^22 J/yr.

For the whole ocean the heat capacity is 2.85*10^24 J/C.

Under sustained current conditions, they will warm at a rate of 1C every 300 years. Ignoring climatic effects, this will be accompanied by a sea level rise of 2.2 metres/century.

For practical purposes, not an infinite energy sink. Even at current rates of change there are consequences.
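For anyone checking the arithmetic, here is a short sketch using EM's own inputs (the mean-depth figure is an added assumption). It reproduces his heat-capacity and warming-rate numbers and lands near his sea-level figure. Note, though, that his expansion coefficient looks roughly an order of magnitude above commonly quoted seawater values (around 2*10^-4/C), which would explain the high rise estimate.

```python
# Back-of-the-envelope check of Entropic man's figures, using his inputs.
volume = 1.35e9 * 1e9      # ocean volume in m^3 (1.35*10^9 km^3)
density = 1.0e3            # kg/m^3, round number for seawater
c_p = 2.1e3                # J/kg/C (EM's figure)
alpha = 1.39e-3            # /C (EM's figure; see caveat above)
imbalance = 1e22           # J/yr (EM's figure)
mean_depth = 3.7e3         # m; an assumed mean ocean depth, not EM's input

heat_capacity = volume * density * c_p          # ~2.8*10^24 J/C
years_per_degC = heat_capacity / imbalance      # ~280 years per 1C
rise_per_century = mean_depth * alpha * (100 / years_per_degC)

print("whole-ocean heat capacity:", heat_capacity, "J/C")
print("years per 1C of whole-ocean warming:", round(years_per_degC))
print("thermal-expansion rise:", round(rise_per_century, 1), "m/century")
```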

Jul 10, 2014 at 1:53 PM | Unregistered CommenterEntropic man

EM - thanks for that.

I'll look at your figures later. I'd like to compare them with a calculation I once did assuming that *all* of the energy arriving at the Earth from the Sun in one year went into the ocean and stayed there. (I was wondering how long it would take for the sea to start boiling.)

I assume that

- "Imbalance" comes from a "radiative forcing" calculation using Myhre's formula or something similar.
- There is an assumption that the heat intercepted by the upper layers of the ocean gets mixed throughout (rather than rapidly finding its way out again via evaporation of water)


2.2 m per century is roughly one inch per year. In that case, should not the sea have risen about a foot or so since the start of 'the pause'?

Jul 10, 2014 at 5:16 PM | Registered CommenterMartin A

Martin A

The imbalance is a challenge for researchers at present. The satellite data puts it higher than the figure I used, about 6*10^22 J/yr. The figure from terrestrial accounting is around 0.9*10^22 J/yr. Either the satellites are overestimating, the terrestrial data is underestimating, or both. I chose a fairly conservative figure.

I also assumed that almost all of the imbalance ended up in the oceans, while the actual figure is around 93%.

The current low rate of sea level rise is a weakness in my argument. There is some lag in the system. The best match between theory and observation is that current conditions reflect the energy budget 30 years ago. One test of the hypothesis would be to look for sea level rise accelerating with time. It has accelerated, doubling during the 20th century, but not as fast as I just predicted here.

Jul 10, 2014 at 6:43 PM | Unregistered CommenterEntropic man

EM Thanks for that.

I read a paper by Hansen not that long ago (and it was not all that old) where he said that the imbalance from satellite measurements was too large to be believable, so they were dependent on modelling (or rather 'modeling') to say what it was.

As I say, I'll look at your figures (but only after I've finished mowing the lawns and then spent the day by the sea).

Jul 10, 2014 at 7:18 PM | Unregistered CommenterMartin A

Dear Douglas J. Keenan,

First, you say:

> [T]he word that you emphasize, “best”, was not written by me. (I think that I might have approved the change, but only because we had extended discussions about many of the edits and I got weary; I was apparently much more pernickety than most authors.)

You seem to recant the conclusion with the word "best". Is that the case? A "yes" or a "no" would suffice.

***

Second, the blurb of your Director's Cut says that you have "incorporated" changes from the WSJ's editors.

Where can we read the original?

***

Third, here's how your (or the WSJ's) Director's cut ends:

Until research to choose an appropriate assumption is done, no conclusion about the significance of temperature changes can be drawn.

So unless we do more research to select an appropriate assumption, we can't draw any conclusion about the significance of temperature changes. "More research needed": is that all you want to conclude?

If that's the case, this seems a tad weaker than the title of the brown box: An insupportable assumption. If you agree that it's always possible to find a statistical model that fits the data better, how can you conclude that the IPCC's (and the MET Office's) "assumption" is insupportable?

***

Fourth, recall your disagreement at MattStat's, which I quoted earlier, especially:

[M]ost of this is really just a retake on the op-ed piece that I published in the Wall Street Journal. The news here is that the Met Office is effectively admitting that the op-ed piece is valid —and that they tried extremely hard to avoid admitting it.

http://wmbriggs.com/blog/?p=8061#comment-94300

If you recant the conclusion with the word "best," it's unclear which conclusion the MET Office should consider valid. Is it the one from the Director's Cut, according to which "more research is needed", or is it the one about the "insupportable assumption"?

***

Fifth, reconsider the claim you disputed at MattStat's ("If we really want to know whether temperatures have increased, then just look"): it seems to agree with the MET Office's press release, especially here:

The study of climate variability and change is broader than the domain of statistics, most notably due to the importance of the underpinning science of the climate system.

http://metofficenews.wordpress.com/2013/05/31/a-response-on-statistical-models-and-global-temperature/

Unless you wish to dispute this, I don't see how you can claim that the MET Office's "assumption" is "insupportable". In other words, your argument seems to depend upon moving the pea from the "pure stats" thimble to the "random physics" thimble. (Never mind the "anti-science" thimble for the moment.)

Your argument thus seems problematic. How could you take the pea back under the "pure stats" thimble if your argument assumes that comparing the MET Office's "assumption" with a random walk? More importantly, how would this "pure stats" argument matter if you accept that there's always a statistical model that can be a better fit to the data?

***

Sixth, how should the MET Office support whatever "assumption" it chooses? Considering that we have yet to see one single "assumption" you'd find supportable, why would we try to satisfy your demand for a supportable assumption if you can't even provide an example that would meet your own requirements? In other words, why shouldn't we accept Richard Muller's conclusion that all this is pure statistical pedantry?

***

Seventh, you still failed to acknowledge my questions regarding Richard Muller's and Doug McNeall's emails. Were they released without permission? Have you followed through with Doug's suggestions? You do seem to try hard not to respond to this.

Thank you for your responses,

w

Jul 10, 2014 at 11:21 PM | Unregistered Commenterwillard

Oh dear Willard, are you still running in circles?

"So unless we have a validated physical model, we can't justify our choice of model. It just happens that nobody has a validated physical model."

Yep. You've got it.

"It's not that any choice of model is not necessarily unjustified, it's just that nobody ever found the only thing that would justify the choice of a model, according to Douglas."

If you want to propose a physical model and show us the V&V documentation that validates it, you'd be very welcome...

"And unless Douglas finds such justification, Douglas can ask if climate studies form a science:"

Not quite. There are lots of things for which climate scientists do have validated physical models. If you want to know why hot air rises, or why it's colder at the tops of mountains than their bottoms, there are validated physical laws and equations to tell us, able to make reliable predictions.

But if they're real scientists, then they should know better than to start making such claims about stuff they *don't* have a validated model for. That's either ignorant, negligent, or dishonest.

Making the right choice, the one that best corresponds to physical reality, requires further, difficult research, and accepting conclusions based on shaky premises risks foreclosing upon such work. That would be gross negligence for a field claiming to be scientific to commit.

The word "best" here doesn't refer to "best fit", but "best validated".

Strictly, this should say "making "a" right choice, because it's possible to have several different validated models. All you have to do is show that the error bars on it are good enough for the purpose to which you propose to apply it. It doesn't have to be perfect. It doesn't have to be "the best", but it does need to have been demonstrated to be good enough.

"But wait. If "it is *always* possible to find a counter-example model that is a better fit to the data," as Nullius says, how does testing against random walks help to find that best model?"

The words "better" / "best" refers in one case to "best fit", and in the other to "best validated". And as I keep saying, the ARIMA(3,1,0) vs AR(1) comparison is only talking about "best fit" and makes no claims to be validated.

--

"You seem to recant the conclusion with the word "best". Is that the case? A "yes" or a "no" would suffice."

No.

"So unless we do more research to select an appropriate assumption, we can't draw any conclusion about the significance of temperature changes. "More research needed": is that all you want to conclude? "

More research needed, and until you've done the research needed, please stop promoting conclusions you can't support.

"If you agree that it's always possible to find a statistical model that fits better the data, how you can conclude that the IPCC's (and the MET Office's) "assumption" is insupportable?"

Because AR(1) is not validated.

"... it's unclear which conclusion the MET Office should consider valid. Is it the one from the Director's Cut, according to which "more research is needed", or is it the one about the "insupportable assumption"?"

Both.

"Unless you wish to dispute this, I don't see how you can claim that the MET Office's "assumption" is "insupportable"."

Sure. Just explain to us how the AR(1) model they used was underpinned by the science of the climate system. If you can't do that, you'll have answered your own question.

"How could you take the pea back unde the "pure stats" thimble if your argument assumes that comparing the MET Office's "assumption" with a random walk?"

This question appears to be incoherent. What are you asking?


"More importantly, how would this "pure stats" argument matter if you accept that there's always a statistical model that can be a better fit to the data?"

That *is* the "pure stats" argument.

"Sixth, how should the MET Office support the "assumption" it ever chooses?"

By validating it.

"Considering that we have yet to see one single "assumption" you'd find supportable, why would we try to satisfy your demand for a supportable assumption if you can't even provide an example that would meet your own requirements?"

Because you can't get to the conclusion you want without doing so.

Detection and attribution require a validated physical model of the statistics of the natural background climate to be able to detect deviations from it. You don't have one. So you can't (validly) do detection and attribution. You can't (validly) say the climate is changing, or what's causing it.

If you're happy with that, fine. If you want to be able to do it, then you're going to have to do the extra work.

"You do seem to try hard not to respond to this."

It's no effort at all!

On the other hand, you do seem to be trying *very* hard to turn it into some sort of "Have you stopped beating your wife?" sort of question. Nobody cares. Why would you think anybody would?

Jul 11, 2014 at 10:18 AM | Unregistered CommenterNullius in Verba

> No.

So Nullius knows that Douglas does not recant his conclusion including the word "best". Interesting. Let's recall Douglas' response:

The quotes that you cite are from an op-ed piece that I published in the Wall Street Journal. Op-ed pieces are edited by the newspaper’s editors, and authors do not have as much control over the exact wording as authors do with, say, peer-reviewed literature. In this case, the word that you emphasize, “best”, was not written by me. [...]

The week that the op-ed piece was published, I also put a Director’s Cut on my web site, at
http://www.informath.org/media/a42.htm

That does not contain the word that you emphasize.

Why would Douglas tell me that, if not to distance himself from the conclusion of his WSJ article?

***

Let's suppose that Douglas still endorses that conclusion. Here it is again:

The improved fit does tell us that until more research is done on the best assumptions to apply to global average temperature series, the IPCC's conclusions about the significance of the temperature changes are unfounded.

If "it is *always* possible to find a counter-example model that is a better fit to the data," as Nullius says, how does Douglas' conclusion follow exactly?

***

The conclusion published in the WSJ op-ed and the one in the Director's Cut are quite different. If you remove the "best" from that conclusion, the illusion of having made a "pure stats" argument falters. No fit ever "tells you" you need to do more research, for the simple reason that you can always find a better fit.

Unless Douglas argues that there's a more plausible model, something which he actually says in his op-ed, he has no case. He needs to move his pea back under the "random physics" thimble for his argument to carry any weight. Here's how Carrick would respond to someone who'd try to play the "but random walk" gambit:

If a (temperature) series appears to be a random walk, it is unless a better model is found

So science is a form of … interpretive dance now?
What utter blather.

http://rankexploits.com/musings/2011/best-data-trend-looks-statistically-significant-so-far/#comment-84615

More research might be needed by Douglas and Nullius on how to construct an argument before moving their pea under the "anti-science" thimble.

Jul 11, 2014 at 3:30 PM | Unregistered Commenterwillard

"So Nullius knows that Douglas' does not recant his conclusion including the word "best"."

The conclusion was correct, the way it was worded was not. It's not quite right, but not in the way you seem to be trying to argue. And if I don't know what Doug meant by it, what makes you so sure that you do?

In my view, it should have said something like: "The improved fit does tell us that until more research is done on producing validated assumptions to apply to global average temperature series, the IPCC's conclusions about the significance of the temperature changes are unfounded." Any set of validated assumptions would do, so "best" is the wrong way to describe it. But in an op ed for the general public, editors do sometimes sacrifice precision for simplicity.

And the conclusion remains - you cannot argue that the observed temperature record is not natural background variation by showing it doesn't fit an arbitrarily selected model like AR(1), because the real temperatures do not look anything like AR(1). If you follow their method using the same procedure with a model that the observations do look like, the mismatch goes away - which shows that their logic must be flawed. To do this properly, you need some other way of picking between models besides their fit to the data, which means validated physics, and until more research is done producing such a model, the IPCC's conclusions about the significance of the temperature change will remain unfounded.

"If "it is *always* possible to find a counter-example model that is a better fit to the data," as Nullius says, how does Douglas' conclusion follow exactly?"

I've already explained that several times now. The conclusion follows from the fact itself. There is no contradiction. What is your difficulty with the argument?

"No fit ever "tells you" you need to do more research, for the simple reason that you can always find a better fit."

I've already explained this. "Best" is used in a different sense here: not of the best fit, but the best kind of model to use - i.e. one that has been validated. That there is always a better fit means you need a different sort of reason for selecting one.

"If a (temperature) series appears to be a random walk, it is unless a better model is found"

The statement isn't quite right. The global temperature series is regarded as a 'random walk' because it passes the standard statistical tests for being non-stationary (interpreted here as 'random walk'), such as the Augmented Dickey-Fuller test. Stationary series can pass the test (even though they're not) so long as you only look at a short enough time interval that the slight distinction doesn't show up. Statisticians use the non-stationary model because it gives more accurate and less misleading results. That's generally regarded as desirable in science.

But it is important to remember that it's still an approximation, even if it's a more accurate one. We know it's stationary; we're only approximating it with a non-stationary one, just as people fitting linear trends to wiggly temperature curves know they're only using a short-term approximation. Linear trends are just the same.
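A quick sketch of that point on simulated data (illustrative parameters): a stationary AR(1) series with phi near 1 will typically fail to reject the unit-root null of the Augmented Dickey-Fuller test over a century-scale sample.

```python
# Sketch: a stationary AR(1) series with phi near 1 usually passes the ADF
# test's unit-root ('random walk') null in a short sample. Illustrative only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
n, phi = 134, 0.98                 # stationary by construction
y = np.zeros(n)
for i in range(1, n):
    y[i] = phi * y[i - 1] + rng.normal()

stat, pvalue = adfuller(y)[:2]     # Augmented Dickey-Fuller test
print("ADF statistic:", round(stat, 2), " p-value:", round(pvalue, 3))
# A large p-value means the unit-root null is not rejected, even though the
# generating process is stationary: the test cannot tell them apart here.
```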

"So science is a form of … interpretive dance now?"

I notice you sidestepped very gracefully when I asked whether you understood that the AR(1) class of models the IPCC and Met Office were using also has the random walk as a limiting case. Did you? Do you?

Jul 11, 2014 at 8:06 PM | Unregistered CommenterNullius in Verba

> In my view, it should have said something like: "The improved fit does tell us that until more research is done on producing validated assumptions to apply to global average temperature series, the IPCC's conclusions about the significance of the temperature changes are unfounded." Any set of validated assumptions would do, so "best" is the wrong way to describe it.

The improved fit does not tell us anything about "producing validated assumptions". It only tells us that Douglas could find a model that offers a better fit, which is a trivial thing to do since there's always another model with a better fit. Douglas' argument could be used against any model whatsoever, including those produced using "validated assumptions." One could try to self-seal the argument with "but that does not apply to validated models," just like any true Scotsman would.

There's no a priori reason to exclude from the "set of validated assumptions" models that fit the data less well than what Douglas found. Excluding this possibility would reduce Douglas' quest to absurdity. Hint: you can always find a better fit.

The only way Douglas' argument can work is to claim that his random walk model is more plausible, which leads him beyond the realms of statistical chicanery.

***

> That there is always a better fit means you need a different sort of reason for selecting one.

More than that: it shows that the "pure stats" thimble can't stand on its own. Douglas needs to argue from plausibility, the best one can do until we "produce" "validated assumptions." Those will no doubt be produced using other "validated assumptions", for it can only be "validated assumptions" all the way down.

The pea has to move from the "pure stats" thimble to the "random physics" thimble, whether Douglas likes it or not.

***

> [Y]ou cannot argue that the observed temperature record is not natural background variation by showing it doesn't fit an arbitrarily selected model like AR(1), because the real temperatures do not look anything like AR(1).

The Met Office does not need to use its trend analysis to show that the observed temperature record is not natural background variation. They knew that before their statistical analysis.

Unless I missed the "justification" section of his essays, the same arbitrariness characterizes the model Douglas chose for his tests, and the choice of tests itself.

In fact, the connection between these tests and significance has not even been established.

***

> The global temperature series is regarded as a 'random walk' because it passes the standard statistical tests for being non-stationary (interpreted here as 'random walk'), such as the Augmented Dickey-Fuller test.

Contrapositively, this indicates that testing non-stationarity has little merit on its own, considering what we know about global temperature series independently from such testing.

Why not apply the same reasoning to the chaotic nature of thermals in a saucepan of water on the stove in order to prove that it isn't getting hotter?

***

> AR(1) class of models the IPCC and Met Office were using also has the random walk as a limiting case.

We could even convene that deterministic processes form a subclass of stochastic processes. Works for games and machines. If you can model climate mechanisms that way, go for it! More random physics might be needed. Ask Dimitri for help.

Afterwards, don't come back and pretend you're only making a "pure stats" argument.

Jul 12, 2014 at 2:39 AM | Unregistered Commenterwillard

"Douglas' argument could be used against any model whatsoever, including those produced using "validated assumptions.""

No it can't. Validation trumps goodness of fit.

It's like the Ptolemaic theory of epicycles versus Newton's law of gravitation. With epicycles, you can't validate it because you can't falsify it. If the data deviates from what you expect, you can always add or adjust epicycles to match it. While a specific set of epicycles can be rejected, you can always find a better fit. But Newton's theory is simpler, with far fewer parameters to adjust, and far more limited effects from adjusting them. If you assume an inverse-square central force, you are very tightly limited in what sort of behaviours you can predict. You can test those predictions, and either confirm the reliability of prediction or reject the theory in its entirety.

So say you want to test for the possibility of an unknown planet beyond the six you know. You can calculate what you ought to see, with what error bars, if there are only the six planets, and you can calculate the possible orbits of a seventh planet and the effects of each, and what you therefore ought to see, with error bars. If the error bars overlap, you know you can't tell. (You can't detect small rocks this way, for example.) If the error bars are small enough to distinguish the outcomes, then you can test for whether the observations deviate significantly from your 6-planet prediction. Are the observations outside the error bars?

This cannot work with an epicycle theory. The Royal Astronomer presents a crude 1-cycle model, compares it to the same model plus the Hand of God, and concludes the clockwork universe of epicycles has been recently disturbed. Another astronomer points out that if you use a model with four epicycles instead, the data fits even better, and the clockwork universe is restored. You couldn't detect a seventh planet or any other external influence because you don't know what epicycles would result from the six-planet or seven-planet hypotheses. There's always an epicycle model that will fit, and if you include the uncertainty over the epicycle parameters, the error bars on the predictions are always floor to ceiling.

Validation provides external constraints on a theory. It has to be shown able to make reliable predictions beyond what can be explained by its adjustable parameters. Then it doesn't matter that there are always models that fit better, because they've not been validated. The moment you adjust a model to fit new data, you have to go back to step 1 and start to validate it again - make predictions with error bars, collect new data unseen when the model was constructed, test. Only when you stop having to adjust the model and find it passing all the tests does confidence start to build up in it. Only when you've got a lot of confidence built up can deviations from it be surprising.
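In toy form, the validate-then-test loop looks like this (invented data; statsmodels assumed):

```python
# Freeze the model on early observations, predict the later period with
# error bars, then check the unseen data against the band.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)
y = 0.05 * t + rng.normal(0, 1.0, 100)     # made-up series

res = sm.OLS(y[:70], sm.add_constant(t[:70])).fit()   # model frozen here

pred = res.get_prediction(sm.add_constant(t[70:])).summary_frame(alpha=0.05)
lo = pred["obs_ci_lower"].to_numpy()
hi = pred["obs_ci_upper"].to_numpy()
inside = (y[70:] >= lo) & (y[70:] <= hi)
print("%.0f%% of held-out points inside the 95%% band" % (100 * inside.mean()))
# Confidence accrues only while unseen data keeps landing inside the band;
# re-tune the model to fit a miss and the validation clock restarts.
```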

"The only way Douglas' argument can work is to claim that his random walk model is more plausible, which leads him beyond the realms of statistical chicanery."

You're complaining that Doug's random walk model is less plausible than the IPCC's random walk model? On what basis?

"Unless I missed the "justification" section of his essays, the same arbitrariness characterizes what Douglas chose as model for his tests, the choice of tests itself."

How many times have I been shouting this at you? YES! Doug's model is equally arbitrary, no more, no less. The only purpose in choosing ARIMA(3,1,0) is to demonstrate the ridiculousness of choosing AR(1) and thinking you'd proved anything. The only purpose is to use the same methods the Met Office uses and come to the opposite conclusion, thereby demonstrating that the method must be flawed, because you can't come to contradictory conclusions by valid methods. You telling us that Doug's method is equally flawed IS NOT NEWS. That's the whole point of it!

"We could even convene that deterministic processes form a subclass of stochastic processes."

You still haven't answered the question. Why are you dodging?

"The MET Office did does not need to use its trend analysis to show that the observed temperature record is not natural background variation. They knew that before their statistical analysis."

How?

Because this is the primary question at the heart of all this argument. HOW do they know?

And if they have such reasons, with quantified evidence and all uncertainties accounted for, then why do they not present *them*, instead of this AR(1) rubbish?

Jul 12, 2014 at 11:55 AM | Unregistered CommenterNullius in Verba

Nullius,
You actually seem to understand some of this quite well, so I am quite surprised by some of what you're saying.


> How?
>
> Because this is the primary question at the heart of all this argument. HOW do they know?


I think we all agree that the global surface temperature is about 1 degree higher today than it was in the late 1800s. Agree?

So, what could possibly do this?

1. The planetary energy imbalance in the late 1800s was positive and quite large (about 4 W/m^2).

This could explain the warming we've observed and explain the current planetary energy imbalance. The problem is that you would expect this to cause rapid warming initially, and then slower warming as you tend asymptotically towards equilibrium. This isn't consistent with what we've observed.

2. Random internal variability has released energy from the oceans to warm the surface (aka Bob Tisdale).

The problem with this is that the land and atmosphere have a very low heat capacity and hence any energy that drives the temperature above equilibrium would be lost in a matter of months. It is also not consistent with us having a planetary energy imbalance today. So this cannot explain our observed warming either.

3. Random internal variability produces a change in radiative forcing.

If this were possible, then it could explain the observed warming, but - to date - there is no plausible, tested mechanism by which internal variability can produce a change in radiative forcing. Also, if you wanted this to explain all the warming, you would then need to explain what's happened to the radiative forcings associated with our own emissions.

4. The Sun.

There is no known mechanism by which the Sun could drive the warming we've seen since 1880.

5. The increase in atmospheric CO2 concentrations has led to an increased external radiative forcing. This, together with feedbacks like water vapour, largely explains our observed surface warming.

The change in forcing due to our own emissions can explain about half of the observed warming. The known relationship between atmospheric temperature and water vapour concentration and the impact of lapse rate feedback can explain the rest. There are - of course - other influences, but you get pretty close if you just consider these influences.

So, I would argue that if one were to ask the question "is it possible for random internal variability to increase surface temperatures by about a degree over a period of about a century?", the answer would be "we know of no way in which this is possible". If you ask the question "could anthropogenic influences explain an increase in surface temperatures of about 1 degree over a period of a century?", the answer would be "yes". So, as Willard points out, you don't need to know that the warming is statistically significant in order to be fairly certain that, if such a warming were to occur, it is extremely unlikely to be simply a random natural fluctuation. It may be possible, but we know of no way (given the current condition of the planet) in which it is possible. We also know that the alternative (anthropogenic influences) is possible.


> And if they have such reasons, with quantified evidence and all uncertainties accounted for, then why do they not present *them*, instead of this AR(1) rubbish?

They do. The AR(1) rubbish is purely a method for determining the approximate warming trend and the uncertainty in the trend. They explain this quite clearly in their response to the parliamentary question that was prompted - I think - by Doug Keenan's suggestions. You seem to be arguing against basic data analysis. Telling people what the warming trend is does not imply that you haven't also done work trying to explain why it is this. What do you think all the radiative forcing analysis is about?

Jul 12, 2014 at 12:42 PM | Unregistered CommenterAnd Then There's Physics

"You actually seem to understand some of this quite well, so I am quite surprised by some of what you're saying."

Thank you! That's progress. :-)

"I think we all agree that the global surface temperature is about 1 degree higher today than it was in the last 1800s. Agree?"

I think the uncertainty in the earlier part of the record is greater than most people think, as the coverage is very poor, but yes. There's evidence for that.

"1. The planetary energy imbalance in the late 1800s was positive and quite large (about 4 W/m^2)."

I suspect you're talking about the "forcing" here, which is *not* the planetary energy imbalance, and is in any case back-calculated by implication from the models. We can't directly measure it that accurately today. We certainly don't know what it was in 1800.

Forcing is the radiative imbalance at the tropopause that arises from a changed input after you make the change but prior to letting the temperatures adjust. The *actual* energy imbalance after temperatures adjust is much smaller.

"The problem is that you would expect this to cause rapid warming initially, and then slower warming as you tend asymptotically towards equilibrium. This isn't consistent with what we've observed."

That depends on the detailed history of the forcing. If it was a step change, then yes, you get an exponential decay to the new level. But what if it wasn't? We have a lot of adjustable parameters here - the forcing in each individual year of the 20th century - which weakens the evidence a lot.
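To make that concrete, here's a toy one-box calculation (every number here is invented for illustration):

```python
# Toy one-box energy balance, C dT/dt = F(t) - lam*T, showing that the shape
# of the warming depends on the assumed forcing history.
import numpy as np

lam = 3.2        # W/m^2 per K, illustrative radiative response
C = 2.0e8        # J/m^2/K, roughly a 50 m ocean mixed layer (illustrative)
dt = 86400.0     # one-day steps
n = 150 * 365    # 150 years

def integrate(F):
    T = np.zeros(n)
    for i in range(1, n):
        T[i] = T[i-1] + dt * (F[i-1] - lam * T[i-1]) / C
    return T

step = np.full(n, 2.0)              # 2 W/m^2 switched on at t=0
ramp = np.linspace(0.0, 2.0, n)     # the same forcing reached gradually

print("step: %.2f K, ramp: %.2f K after 150 yr"
      % (integrate(step)[-1], integrate(ramp)[-1]))
# The step gives fast-then-slowing warming; the ramp gives near-steady warming.
# Same final forcing, different profile - the profile only constrains the
# history if you assume you know the history.
```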

"2. Random, internal variability has released energy from the oceans to warm the surface (aka : Bob Tisdale)."

Step changes at the times of the big El Ninos are a better fit to the data, and because we're constraining the input using an external source of information (the ENSO index) rather than having a choice, the evidence is stronger. However, it's not quantified, and ENSO+trend is indistinguishable in its predictions from ENSO. The lack of quantified prediction makes this just another unvalidated alternative. Yes, it's an option, and hasn't been ruled out, but neither is there very much support.

"The problem with this is that the land and atmosphere have a very low heat capacity and hence any energy that drives the temperature above equilibrium would be lost in a matter of months."

This doesn't follow. Bob's theory is that El Nino releases a mass of warm ocean water from the depths to the surface, where it hangs about for a number of years, and it is the residue of warm near-surface water that maintains the warmth of the land and atmosphere. The rate at which energy is lost from the ocean is a more complicated affair, and depends on the timescale you're considering. The topmost surface layers gain or lose heat in a matter of months, with the summer/winter cycle. But the deeper you go, the slower the processes. If the oceans were solid and followed the heat diffusion equation, the penetration depth would increase with the square root of the timescale of the changes, so heat capacity would grow similarly. The decadal heat capacity is considerably bigger than the monthly one, and energy added to the former will only emerge on decadal timescales, as it returns to equilibrium. However, the oceans are more complicated than that, with convection, upwelling, convergence, deep water formation, and the whole global thermohaline circulation to consider. Not to mention biological influences, cloud feedbacks, the effect of changing windiness on surface heat transfer, and so on. We've got very little solid data, and the big jumps in measured heat content occur at the times of the big jumps in measurement coverage. It's hard to tell.
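The square-root point, in rough numbers (the effective diffusivity is an assumed, illustrative value):

```python
# Penetration depth of a temperature signal in a diffusive medium scales as
# d ~ sqrt(kappa * t), so the effective heat capacity grows with timescale.
import math

kappa = 1.0e-4   # m^2/s, assumed effective (eddy) diffusivity; the molecular
                 # figure is roughly a thousand times smaller
for label, seconds in [("1 month", 2.6e6), ("1 year", 3.2e7), ("1 decade", 3.2e8)]:
    print("%s: penetration depth ~ %.0f m" % (label, math.sqrt(kappa * seconds)))
# Decadal penetration is about 11 times the monthly depth, so the heat
# capacity relevant to decadal changes is an order of magnitude larger.
```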

"If this were possible, then it could explain the observed warming, but - to date - there is no plausible, tested, mechanism in which internal variability can produce a change in radiative forcing."

Plausible, yes, tested, no. I pointed you to the Lorenz index cycle paper a while back - did you have a look at it?

Changes in cloudiness can have that effect. Clouds are complicated. It's not just how much cloud there is, but where it is, how thick it is, how big the droplets, at what height, how correlated clouds at different heights are, and how long they last. Clouds, as I noted earlier, tend to be driven primarily by temperature differences, humidity, and winds. The ENSO cycle involves some major cloud feedbacks - the warm surface water being driven west across the Pacific cools the eastern Pacific, reducing cloudiness and increasing insolation, which drives the build-up of warm water. It's a positive feedback cycle that 'charges the capacitor' until the trade winds break and the accumulated heat is suddenly released to the surface.

ENSO is just one such effect that we have only a partial understanding of. The index cycle is another. There are stratospheric polar vortex collapses. There are oscillations and currents galore.

We have history, too. There are the Dansgaard-Oeschger events and Bond interstadials, there's the Younger Dryas, there is - if you believe the ice cores - the medieval warm period and the Roman warm period and the Minoan warm period, there's the Holocene Optimum and the 'Green Sahara'. There's the archaeological record of the North American droughts lasting centuries.

*Something* must have caused them. What else, if not internal variability?

"There is no known mechanism by which the Sun could drive the warming we've seen since 1880."

There's no known mechanism by which the Sun could have driven a cold spell around the Maunder minimum, either. And yet the idea has nevertheless been taken seriously.

The most obvious potential mechanism is the Svensmark one, where solar 'weather' has an effect on radiation that affects cloud nucleation. Since we don't know how cloud nucleation works, we can't yet tell.

But it's not the only possibility. The heating of the stratosphere is particularly sensitive to the UV component of sunlight, for example, and you could get large changes in UV that didn't affect the total much.

Not that I take solar influences very seriously - the evidence for them is very weak. But they're not ruled out, either.


"5. The increase in atmospheric CO2 concentrations have lead to an increased external radiative forcing. This, together with feedbacks like water vapour, largely explain our observed surface warming."

Yes, but only if you manually set a large number of poorly understood and inaccurately measured forcings to get it to match.

The steady rise in CO2 ought to yield a steady rise in temperature, but the temperature rise is not steady. It rose rapidly from 1910 to 1940, when CO2 was still relatively low; it then stopped and even dropped from 1940 to 1980. It rose rapidly from 1980 to 2000, and then it stopped again from 2000 to 2014 and who knows how much longer.

People have come up with various epicycles to try to explain these deviations, of course. The dip from 1940-1980 was at first explained with industrial pollution and global dimming. But if that's the case, wouldn't you expect the places with the most pollution to be cooling, and those with the least pollution to be warming? And yet, most of the warming is in the polluted northern hemisphere, and not the pristine southern one. The post-Communist industrial surge of China and India took over where Europe and the USA left off, so why did the temperature start rising again? Why has it now stopped?

The latest speculation that it is suddenly all going into the deep ocean is difficult to sustain. Why was it not going into the deep ocean 1980-2000? Or 1910-1940? What changed? And if it can change to cancel the warming, why can't the same mechanism be causing it?

You see, if you start by pointing to the rise in CO2 and the rise in temperature, and say we know of nothing else that can affect temperature so it must be that, people will just point to the two periods when CO2 was rising and temperature was not. "Ah!" you say "But of course you're ignoring natural background variation! You can get short term excursions from the long term trend!" True. But then that means that the natural background can be of the same magnitude as the global warming, and last decades. I thought you said it couldn't? And how do you tell the difference? How do you tell how long the bumps and dips can last? If you simply ascribe every rise to global warming and every drop to background noise, we'll just laugh in your face and move on. If you want to quantify and set bounds on the natural variation, so that the remainder can be attributed to the unnatural, then we'll listen. But this is the problem we've been talking about here for the past week.

--

We all agree that all other things being equal, rising CO2 ought to cause some degree of warming. We all agree that the amount depends not only on the direct greenhouse effect but also a long list of feedbacks that multiply it by an unknown number. We all agree that the number is unlikely to be so small that the warming induced is not at least a significant fraction of that observed. And we all agree (I hope) that on top of this there is superimposed a large natural background 'noise' of at least comparable if not larger magnitude, that we don't fully understand. And without a quantified understanding of the bounds of this natural behaviour, we cannot separate signal from noise, and so determine by how much the warming we expect has been multiplied. We cannot even rightly say it has been detected. Our theory predicts it, but it has not yet been confirmed by observation.

Personally I think a large part of it *is*, but I can't back my opinion with quantified science.

To shortcut this necessary research, and assert that this residual is AR(1) for no better reason than that it's easy to calculate and a lot of other people have done so, is cheating. The Met Office, after being asked five times and chased through Parliament, eventually agreed.

"The AR(1) rubbish is purely a method for determining the approximate warming trend and the uncertainty in the trend."

That was Matt Briggs' point. You don't need all that statistical machinery to tell that. Just plot the raw data and *look* at the temperatures. You can *see* they've risen about a degree over the 20th century just by eye. It's a lot more convincing, and makes no assumptions.

And if this is what you're doing it for, then there's *no* uncertainty in the trend. The data is what it is. The OLS algorithm is what it is. The gradient of the line it produces can be calculated exactly, to as many decimal places as you like. There is no uncertainty about this number at all.
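To be concrete (invented stand-in series):

```python
# The descriptive gradient is a deterministic function of the data:
# same numbers in, same answer out, to machine precision.
import numpy as np

years = np.arange(1900, 2001, dtype=float)
temps = 0.007 * (years - 1900) + 0.1 * np.sin(years / 7.0)

slope, intercept = np.polyfit(years, temps, 1)
print("OLS gradient: %.6f degrees/year" % slope)   # exact; no error bar needed
```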

It's only uncertain if you're not trying to describe what the data is *doing*, but trying to estimate the magnitude of the reason *why* it's doing it. This is the mistake the first Met Office respondent made. They read the IPCC's purely descriptive trend calculation, and interpreted it (because of the confidence intervals) as saying something about the mechanism. Hence the claim of significance.

We all agree the temperature has risen. The argument is over whether it has been shown to have risen significantly. If the trend calculation is purely descriptive, then it doesn't say anything about significance.

Jul 12, 2014 at 3:53 PM | Unregistered CommenterNullius in Verba

Nullius,


> I suspect you're talking about the "forcing" here, which is *not* the planetary energy imbalance, [...]

No, I mean the planetary energy imbalance. I simply mean that if we were out of equilibrium in the mid-1800s by 4 W/m^2, and if we assume there are no other changes in forcings and feedbacks since then, we could have warmed by 1 degree since then and still have an imbalance of around 0.5 W/m^2. However, the profile of the warming would be different to what we've observed. I'm simply providing a scenario and then suggesting that it isn't consistent with observations.


> Step changes at the times of the big El Ninos are a better fit to the data [...] Yes, it's an option, and hasn't been ruled out, but neither is there very much support.

Without an associated change in forcing, it really isn't an option.


> Bob's theory is that El Nino releases a mass of warm ocean water from the depths to the surface, where it hangs about for a number of years, and it is the residue of warm near-surface water that maintains the warmth of the land and atmosphere. The rate at which energy is lost from the ocean is a more complicated affair, and depends on the timescale you're considering.

Nope, if the surface temperature goes up, the surface loses more energy. If there is no change in radiative forcing, then this energy is lost and we cool back to equilibrium. Given the heat content of the land/atmosphere/ocean surface, this would happen quickly. You can't simply increase surface temperatures by about a degree over a century without some kind of change in forcing.


> Plausible, yes, tested, no. I pointed you to the Lorenz index cycle paper a while back - did you have a look at it?

I'm not sure it is plausible as there is no known physical mechanism. No, I haven't, but I don't remember you pointing it out earlier.


> That was Matt Briggs' point. You don't need all that statistical machinery to tell that. Just plot the raw data and *look* at the temperatures. You can *see* they've risen about a degree over the 20th century just by eye. It's a lot more convincing, and makes no assumptions.

What you appear to be proposing is that we should simply say "look, it's obviously higher now than it was then". I would argue that that is rather unscientific. All that the AR(1) analysis is doing is quantifying this. Arguing against it seems to be arguing against basic data analysis and I don't understand why you would do that.


> This is the mistake the first Met Office respondent made. They read the IPCC's purely descriptive trend calculation, and interpreted it (because of the confidence intervals) as saying something about the mechanism. Hence the claim of significance.

No, I think this is what you're claiming the Met Office have done. I don't think it is what they have actually done. Also, the claim of significance can be seen in two ways. Relative to there being no trend, it is clearly significant. Relative to what we know could occur via natural process only, it is also significant. Simply because there is some chance that something we don't yet know about and don't yet understand could play some role, doesn't negate this statement.

Jul 12, 2014 at 4:13 PM | Unregistered CommenterAnd Then There's Physics

"No, I mean the planetary energy imbalance."

OK. I think I see what you're getting at.

"Without an associated change in forcing, it really isn't an option."

But there is an associated change of forcing. The cloud cover in the east Pacific changes.

"Nope, if the surface temperatures goes up, the surface loses more energy."

It does, but that doesn't tell you *how much* more energy, or what the resulting change in temperature will be.

"Given the heat content of the land/atmosphere/ocean surface, this would happen quickly."

As quickly as the reverse process - if forcing increases, less heat is lost, and the oceans would equilibrate within a few months or years, yes? So how come there's all this talk of 'heat in the pipeline' a century from now because of the oceans still absorbing the heat?

The heat content isn't a single number. It depends on the time interval you're talking about.

"I'm not sure it is plausible as there is no known physical mechanism."

Lorenz models the physical mechanism in detail. How can you say there's no physical mechanism if you haven't read the paper?!

"What you appear to be proposing is that we should simply say "look, it's obviously higher now than it was then". I would argue that that is rather unscientific."

It's perfectly scientific. It's a true observation.

" All that the AR(1) analysis is doing is quantifying this."

It isn't.

Quantifying the change means taking the final temperature and subtracting the initial temperature. It's a much simpler problem. It doesn't need anything complicated like AR(1) to solve it.

AR(1) is proposing that the data is generated by a particular statistical model, and calculates the probability of any given set of observations assuming that model. If the model isn't true, the numbers are meaningless. They tell you nothing at all about the likelihood of any given noise sequence because that's not how the noise sequence is generated. They're not measuring measurement errors, or interpolation errors, or sampling errors, or anything of that sort. They're not derived from the physical processes to model the weather noise. Someone's obviously heard you ought to put error bars on scientific data but didn't know how to do it properly, so they just made something up.
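If you doubt that this matters, here's a toy demonstration (Python; the noise coefficients are invented for illustration, not Doug's fitted values):

```python
# Generate trendless integrated-AR(3) noise, then test the OLS trend with the
# usual AR(1)-adjusted degrees of freedom. If the AR(1) error model were right,
# about 5% of trials would come out "significant"; it is typically far higher.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, trials, hits = 134, 500, 0     # n ~ length of the annual record

for _ in range(trials):
    e = rng.normal(0, 1, n)
    dx = np.zeros(n)
    for t in range(3, n):
        dx[t] = 0.3*dx[t-1] - 0.2*dx[t-2] + 0.1*dx[t-3] + e[t]
    y = np.cumsum(dx)                         # the true trend is zero
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    neff = max(n * (1 - r1) / (1 + r1), 3.0)  # AR(1) effective sample size
    se = np.sqrt(resid @ resid / (neff - 2) / ((x - x.mean()) @ (x - x.mean())))
    if abs(slope / se) > stats.t.ppf(0.975, neff - 2):
        hits += 1

print("'significant' trends in trendless noise: %.0f%%" % (100 * hits / trials))
```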

This isn't how one does basic data analysis. It's wrong.

Jul 12, 2014 at 6:28 PM | Unregistered CommenterNullius in Verba

Nullius,


> But there is an associated change of forcing. The cloud cover in the east Pacific changes.

Well, except at the moment, the best evidence suggests clouds provide a very small feedback, rather than a forcing, and this is - I think - insufficient to explain the warming.


> It does, but that doesn't tell you *how much* more energy, or what the resulting change in temperature will be.

What do you mean? If you know how much the surface temperature increases, then you know how much more energy it is losing per second per square meter. You can also estimate the heat content and can hence determine how long it will take to lose this extra energy. Without a change in forcing, it is fast (months, not years).


> Lorenz models the physical mechanism in detail. How can you say there's no physical mechanism if you haven't read the paper?!

I'll have a look. I should maybe have added "physical mechanism that is accepted by others in the field".


> The heat content isn't a single number.

Yes, it is. It's just the mass of a system times its heat capacity.


> It's perfectly scientific. It's a true observation.

If you put that into a paper, I doubt it would pass peer-review.


> Quantifying the change means taking the final temperature and subtracting the initial temperature. It's a much simpler problem. It doesn't need anything complicated like AR(1) to solve it.

Not if you want to know the best-fit linear trend. If you simply subtracted the first number from the last, you'd get the wrong answer more often than not.


> AR(1) is proposing that the data is generated by a particular statistical model, and calculates the probability of any given set of observations assuming that model. If the model isn't true, the numbers are meaningless. They tell you nothing at all about the likelihood of any given noise sequence because that's not how the noise sequence is generated. They're not measuring measurement errors, or interpolation errors, or sampling errors, or anything of that sort. They're not derived from the physical processes to model the weather noise. Someone's obviously heard you ought to put error bars on scientific data but didn't know how to do it properly, so they just made something up.
>
> This isn't how one does basic data analysis. It's wrong.


I think what you've said is all wrong. It is standard practice to use statistical methods to determine the properties of some data. That is really all the AR(1) is doing. I think you should consider that the Met Office scientists know what they're doing and that we're simply discussing this on a blog. What are the chances that you've noticed something obvious that they haven't? I would argue that it's vanishingly small. Some might call it the Galileo fallacy.

Jul 12, 2014 at 6:37 PM | Unregistered CommenterAnd Then There's Physics

"What do you mean? If you know how much the surface temperature increases, then you know how much more energy it is losing per second per square meter."

How? It's not a black body in a vacuum...

"You can also estimate the heat content and can hence determine how long it will take to lose this extra energy. [...] Yes, it is. It's just the mass of a system times it heat capacity."

How do you determine the heat capacity?

"I'll have a look. I should maybe have added "physical mechanism that is accepted by others in the field"."

The ad populum fallacy again?

"If you put that into a paper, I doubt it would pass peer-review."

Why?

(Assuming you're not talking about the sort of peer-reviewer who says things like: "It won't be easy to dismiss out of hand as the math appears to be correct theoretically"!)

"Not if you want to know the best-fit linear trend."

"Best fit" in what sense? If you mean the most likely, it's wrong. If you mean the minimum absolute deviation, it's wrong. If you mean the minimum range, it's wrong. If you mean closest to the median of the distribution, it's wrong. If you mean minimum inter-quartile range, it's wrong. If the distribution is skewed, or has distant outliers, it's wrong. If you swap the variables and regress x on y, assuming the measurements are in x, you get a different trend line. If you find the minimum squared perpendicular distance to the line instead of the minimum squared vertical distance, you again get a different answer. If you weight the points according to their individual measurement errors, or their nearness to the ends of the line, you get a different answer. If you account for residual periodic/seasonal variations first (or pick different 'normals' for your anomaly calculation), you get a different answer.

There are an infinite number of interpretations of "best fit". The standard interpretation in statistics is "most likely", and if the errors are Gaussian and independent, then the probability is a product of exponentials of quadratics, and taking logarithms tells you that the "best fit" is to be found where the sum of squared errors is minimum. But this is only the case for a specific error model. Different error models have different maximum-likelihood procedures.

People know it because it's the easiest to do, and is therefore taught first in all the textbooks, and few get to the point where they teach all the *other* definitions of "best fit". But if you haven't first shown that the data matches the preconditions for the method to be valid, it's *wrong*.
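Here's a toy illustration (made-up data; statsmodels assumed):

```python
# Two defensible definitions of "best fit", two different lines through the
# same data: least squares vs least absolute deviations, the latter via
# median quantile regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = 0.5 * x + rng.normal(0, 0.3, 200)
y[-10:] += 8.0                      # a cluster of outliers at one end

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()            # Gaussian maximum likelihood
lad = QuantReg(y, X).fit(q=0.5)     # Laplace maximum likelihood

print("OLS slope: %.3f  LAD slope: %.3f" % (ols.params[1], lad.params[1]))
# Neither is "the" trend; each is the right answer for a different error model.
```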

" It is standard practice to use statistical methods to determine the properties of some data."

Unfortunately so. But mathematics doesn't care whether it is standard practice.

"I think you should consider that the Met Office scientists know what they're doing and that we're simply discussing this on a blog. What are the chances that you've noticed something obvious that they haven't. I would argue that it's vanishingly small. Some might call it the Galileo fallacy."

This is called argumentum ad verecundiam. I'll quote Locke's original passage in his "Essay Concerning Human Understanding" on this, since it's such a literate description of the phenomenon. :-)

The first is, to allege the opinions of men, whose parts, learning, eminency, power, or some other cause has gained a name, and settled their reputation in the common esteem with some kind of authority. When men are established in any kind of dignity, it is thought a breach of modesty for others to derogate any way from it, and question the authority of men who are in possession of it. This is apt to be censured, as carrying with it too much pride, when a man does not readily yield to the determination of approved authors, which is wont to be received with respect and submission by others: and it is looked upon as insolence, for a man to set up and adhere to his own opinion against the current stream of antiquity; or to put it in the balance against that of some learned doctor, or otherwise approved writer. Whoever backs his tenets with such authorities, thinks he ought thereby to carry the cause, and is ready to style it impudence in any one who shall stand out against them. This I think may be called argumentum ad verecundiam.

Of course, all the great scientific philosophers have said the same thing. There are famous quotes from Galileo (as you note), Bacon, the Royal Society, Popper, Einstein, Feynman... all saying the same thing. You have to wonder why, given how long it has been one of the bedrock principles of Science, they have to keep on saying it?

Jul 12, 2014 at 7:46 PM | Unregistered CommenterNullius in Verba

Nullius,


> How? It's not a black body in a vacuum...

Given the greenhouse effect (which I assume you accept), an increase in temperature of dT leads to an increase of outgoing flux of 4 eps sigma T^3 dT, where eps is about 0.6. So, in the absence of a change in radiative forcing, I can estimate how much the rate of energy loss will increase if the temperature rises by dT.
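Putting rough, illustrative numbers on that (the heat capacities are assumed values, not from any particular paper):

```python
# Radiative response per degree, and the implied e-folding time tau = C/lam
# for a temperature excursion with no change in forcing.
sigma, eps, T = 5.67e-8, 0.6, 288.0
lam = 4 * eps * sigma * T**3
print("extra loss per degree: %.1f W/m^2 per K" % lam)        # ~3.3

for label, C in [("atmosphere only", 1.0e7),
                 ("atmosphere + 50 m mixed layer", 2.0e8)]:   # J/m^2/K, assumed
    print("%s: tau ~ %.0f days" % (label, C / lam / 86400.0))
# ~1 month on the first assumption, ~2 years on the second - which heat
# capacity is the relevant one is, of course, part of the disagreement here.
```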


"Best fit" in what sense? If you mean the most likely, it's wrong. If you mean the minimum absolute deviation, it's wrong

If I take a set of data points, I can determine the best fit straight line through those datapoints. Pretty standard stuff. Most first year students can do this. It doesn't tell you anything about the processes involved, but it does tell you something about the properties of the data.


> How do you determine the heat capacity?

If I know the specific heat capacity of a material (water, ice, land, air, ...) I can determine how much energy has to be added to increase the temperature by dT.


> The ad populum fallacy again?

No, just pointing out that just because someone has something that they call a theory, doesn't mean that everyone else agrees.


> Why?

Because, normally, if you make some claim about a particular dataset, you back that up with actual analysis, not by saying "look, isn't it obvious".


> This is called argumentum ad verecundiam.

That doesn't mean I'm going to be wrong.

Jul 12, 2014 at 8:08 PM | Unregistered CommenterAnd Then There's Physics

"Given the greenhouse effect (which I assume you accept), an increase in temperature of dT leads to an increase of outgoing flux of 4 eps sigma T^3 dT, where eps is about 0.6."

That's the Stefan-Boltzmann equation (or rather, its derivative), and applies at TOA. But radiation is not the only means of heat transfer from the ocean surface, nor does it account for the heat absorbed by it.

"If I take a set of data points, I can determine the best fit straight line through those datapoints. Pretty standard stuff. Most first year students can do this"

Yes, because most first year students generally skip over all that blether at the beginning of the theorem setting out the conditions under which it works, because they already *know* that the questions set in the exercises and exam are all going to start: "Given that the errors are iid Gaussian..."

It doesn't work in the real world.

"If I know the specific heat capacity of a material (water, ice, land, air, ...) I can determine how much energy has to be added to increase the temperature by dT."

Ah. I should have realised you meant specific heat capacity. I should have asked how do you determine the mass?

"Because. normally if you make some claim about a particular dataset you back that up with actual analysis, not by saying "look, isn't it obvious""

We're doing analysis. Subtract the starting value from the final value. We're not trying to show off with impressive-looking equations, just get the answer.

But there's no problem in science with stating the obvious, either. Just so long as it *is* obvious.

Jul 12, 2014 at 8:22 PM | Unregistered CommenterNullius in Verba

Nullius,


> That's the Stefan-Boltzmann equation (or rather, its derivative), and applies at TOA. But radiation is not the only means of heat transfer from the ocean surface, nor does it account for the heat absorbed by it.

Indeed, but if I use 4 eps sigma T^3 dT with eps = 0.6, I get a pretty good approximation of how quickly the surface of the planet loses energy.


> It doesn't work in the real world.

What does that really mean? All we're trying to do is analyse a dataset. It's only a representation of the real world. Sounds like you want something that isn't actually possible to achieve.


> Ah. I should have realised you meant specific heat capacity. I should have asked how do you determine the mass?

Atmosphere is easy. Land and ocean surface not so simple, but one can make conservative approximations. Any sensible approximation will tell you that the energy associated with an increase in surface temperature above equilibrium will be lost in a matter of months, not years.


> We're doing analysis. Subtract the starting value from the final value. We're not trying to show off with impressive-looking equations, just get the answer.

No, we are trying to do analysis. That's essentially the point.


> But there's no problem in science with stating the obvious, either. Just so long as it *is* obvious.

What seems obvious may turn out not to be correct once you do the analysis more thoroughly. That's why you analyse data carefully and not by simply looking at it and going "that's obvious".

Jul 12, 2014 at 8:34 PM | Unregistered CommenterAnd Then There's Physics

"All we're trying to do is analyse a dataset."

What do you mean by "analyse"? Because I don't think you're using it the same way I am.

Jul 12, 2014 at 8:46 PM | Unregistered CommenterNullius in Verba

Nullius,


> What do you mean by "analyse"? Because I don't think you're using it the same way I am.

This may well be the crux of the matter. When the Met Office use AR(1) to present a trend and an uncertainty, all they're doing is presenting some property of the instrumental temperature record. It really is just an attempt to quantify the rate at which the surface temperature is increasing. It doesn't tell us why. It isn't intended to imply that it is simply linear. It is simply a basic analysis that presents some property of the dataset that is reasonably easy to understand and tells us something of how we've warmed.

So, when I use the term "analyse a dataset" all I mean is to determine some properties of a dataset. If one wants to then understand what that data tells you about the system you're observing, I would refer to that as "interpreting the data". It is this that requires some kind of physical model. Statistical techniques can be used to "analyse a dataset". Physical models are needed to "interpret the data".

Jul 12, 2014 at 8:56 PM | Unregistered CommenterAnd Then There's Physics

OK. So why isn't the result of subtracting the first element from the last "some properties of a dataset" that "is just an attempt to quantify the rate at which the surface temperature is increasing"?

Jul 12, 2014 at 8:59 PM | Unregistered CommenterNullius in Verba

Nullius,


> So why isn't the result of subtracting the first element from the last "some properties of a dataset" that "is just an attempt to quantify the rate at which the surface temperature is increasing"?

Ohh, indeed it is a property of a dataset and could well be a reasonable approximation of how we've warmed. Don't get me wrong, I'm not suggesting that it's wrong to do something this simple. It's just hard to see how this is superior to doing a more detailed analysis that determines a trend and an uncertainty.

Jul 12, 2014 at 9:05 PM | Unregistered CommenterAnd Then There's Physics

"Superior" in what sense?

And "uncertainty" in what sense? What do you think the "uncertainty" of a calculated trend actually means?

Jul 12, 2014 at 9:11 PM | Unregistered CommenterNullius in Verba

Nullius,


> What do you think the "uncertainty" of a calculated trend actually means?

I think it means exactly what it appears to mean. Given this dataset, what is the likely range of trends? It almost sounds like you're arguing that it would be better to be ignorant than to actually do any analysis. That we should be absolutely certain that the analysis is perfect before we attempt to do anything or to present any results. I think most (if not all) practicing scientists would disagree.

Jul 12, 2014 at 9:21 PM | Unregistered CommenterAnd Then There's Physics

"I think it means exactly what it appears to mean. Given this dataset, what is the likely range of trends."

Right, and that's the problem, because it's not. "Likely" implies you're calculating the probability of something - in this case the probability of a parameter being in a given range given observed data and an assumed error model. But if the error model is wrong, then the calculation of what is "likely" will be wrong, too.

If you were just using it as some sort of rough measure of the straightness or scatter of the points, then that might be arguable. It's a poor choice compared to some others, but it's not actually wrong. But as soon as you start asking about what is "likely", you have to do the calculation correctly.

"It almost sounds like you're arguing that it would be better to be ignorant than to actually do any analysis."

No, I'm saying you have to use the appropriate tools for the job. Once you get beyond first-year statistics and ordinary least squares, the textbooks talk about more sophisticated techniques for more complicated behaviours. The book Time Series Analysis by Box, Jenkins, and Reinsel (for example) discusses model identification in chapter 6. It's complicated, and I can't give a full recipe here, but basically you look for and remove obvious periodic (seasonal) components, you test for stationarity with tests like Augmented Dickey-Fuller and take differences until it passes, then you use properties of the autocorrelation and partial autocorrelation functions to identify the stationary part. (Or you can do it by fitting Pade approximants to the Fourier spectrum, or test the fit to various models with the Akaike information criterion, or other more complicated methods.) Finally, having selected a class of model, you fit the parameters using maximum likelihood methods for the distribution in question. Then afterwards you have to go through a whole lot of diagnostic tests on the fit and the residuals to make sure you haven't been misled, or misspecified the model.
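As a sketch of that loop (made-up series; statsmodels assumed; real work needs the full diagnostics at every step):

```python
# Box-Jenkins identification in miniature: difference to stationarity,
# read orders off the ACF/PACF, fit by maximum likelihood, check residuals.
import numpy as np
from statsmodels.tsa.stattools import adfuller, acf, pacf
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
e = rng.normal(0, 0.1, 200)
dx = np.zeros(200)
for t in range(3, 200):                   # invented integrated-AR(3) series
    dx[t] = 0.3*dx[t-1] - 0.2*dx[t-2] + 0.1*dx[t-3] + e[t]
y = np.cumsum(dx)

# 1. Difference until the ADF test rejects a unit root (p < 0.05).
d, work = 0, y.copy()
while adfuller(work)[1] > 0.05 and d < 2:
    work = np.diff(work)
    d += 1

# 2. ACF/PACF of the differenced series suggest the AR/MA orders.
print("d =", d)
print("PACF:", np.round(pacf(work, nlags=5), 2))   # AR order from PACF cutoff
print("ACF: ", np.round(acf(work, nlags=5), 2))    # MA order from ACF cutoff

# 3. Fit by maximum likelihood, then check the residuals look like white noise.
res = ARIMA(y, order=(3, d, 0)).fit()
print("residual ACF:", np.round(acf(res.resid, nlags=5), 2))
```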

Doing this gives a far more precise and accurate description of the properties of the data. It can tell you about the trend if there is one, about the possibility of seeing spurious trends, about the autocorrelation structure, about the frequency spectrum, and more. Seen as a description of the data, this is a lot better, and a lot less likely to mislead. But while it can often give some useful hints as to the underlying mechanisms, it's only an approximation and it can't tell you for sure how the physics works.

That's what Doug did to generate his ARIMA(3,1,0) model. It's as much a textbook standard as OLS trends, and is actually recommended for this sort of data, while OLS is not.

You have to be careful not to deduce too much from it - in particular, you can't tell from it if observed data is 'abnormal' / 'significant' or not. But as a description of the behaviour of the data, following the textbook procedure for autocorrelated data would be unexceptionable.

But given the complexity of the ARIMA description - for the layman I'd use only the most simple statistics, warn them that such data can be misleading, and not try to make any claims about what's "likely". I'd not use linear trends, as they're almost certain to mislead. I might use something like LOESS smoothing, with caution and lots of warnings. But the best method is always to plot out the raw data, as is. It's as complete and undistorted a description as you can get.
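For example (made-up series; statsmodels' LOWESS assumed):

```python
# A cautious descriptive overlay: smooth the raw points, don't model them.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(5)
years = np.arange(1880, 2015, dtype=float)
temps = np.cumsum(rng.normal(0, 0.05, years.size))   # stand-in anomalies

smooth = lowess(temps, years, frac=0.3)   # frac sets the smoothing window
# smooth[:, 0] is years, smooth[:, 1] the curve; plot it over the raw points
# and say plainly that the choice of window shapes what the eye sees.
```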

Jul 12, 2014 at 9:57 PM | Unregistered CommenterNullius in Verba
