Thursday, Apr 12, 2012

Cool exchange

A couple of weeks ago Tamsin Edwards discussed what I think might be a better way forward for those who are interested in understanding the climate debate.

I think a large part of the audience who visit this blog (thank you) contradict these findings. Your trust in the science increases the more I talk about uncertainty! And I think you place greater importance in “calculative” rather than “relational” trust. In other words, you use the past behaviour of the scientist as a measure of trust, not similarity in values. I’ve found that whenever I talk about limitations of modelling, or challenge statements about climate science and impacts that I believe are not robust, my “trust points” go up because it demonstrates transparency and honesty. (See previous post for squandering of some of those points…). Using a warm, polite tone helps a lot, which supports Hebba’s findings. But I would wager that the degree of similarity to my audience is much less important than my ability to demonstrate trustworthiness.


Reader Comments (166)

Tamsin/Richard
I know you guys are too busy to be spending time on this stuff. This is not the first time my point has been misunderstood though - a hazard of blog science.

The above is an example of how the IPCC has pulled the wool over our eyes, including those of experienced scientists.

Let us say you have a physical model at hand, and 10 different outcomes are possible in total. Let us say the model predicts 5 of those 10 discrete outcomes as being likely. The model has given itself a 50% chance of being right. It doesn't matter that it is physically based.

In other words, though a model is constructed physically, if it takes part in a predictive exercise that produces ~50% of possible outcomes, it is not functioning as a physical model.

Let us say company X builds a coin-flip prediction machine. As we know, the coin flip is traditionally an example of a randomness exercise. But it is also true that the result of each flip is, after all, subject to nothing but deterministic forces. The machine monitors the force/direction/fumble at the moment of each flip and estimates the result of each flip, taking into account air turbulence, wind, and other factors.

Let us say our machine issues a prediction as follows: At least 10 results out of the next total 100 flips will be heads (95% confidence).

What would you think of that?

My thoughts would be: "Big deal!! Are you joking? Even I without your fancy machine could give that prediction!"
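For what it's worth, Shub's point can be checked exactly with a few lines of Python (using the hypothetical machine's numbers, 10 heads out of 100 flips): the probability that a fair coin gives at least 10 heads in 100 flips is so close to 1 that claiming it at "95% confidence" is empty.

```python
from math import comb

# Exact probability that a fair coin shows at least 10 heads in 100 flips.
# All 2**100 flip sequences are equally likely; count those with fewer
# than 10 heads and subtract from 1.
p_fewer = sum(comb(100, k) for k in range(10)) / 2**100
p_at_least_10 = 1 - p_fewer
print(p_at_least_10)  # so close to 1 that it rounds to 1.0 in floating point
```

So the machine's "prediction" is satisfied by essentially every possible run of 100 flips, which is exactly the sense in which it rules nothing out.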

In other words, a deterministic prediction exercise is only worth what it rules out, and that is the proof of its functioning as a physical model. Each of the models the IPCC uses in its ensemble may purportedly be a quasi-deterministic physical model, but the above is an example of how the IPCC utilizes them in a predictive exercise that is, in essence, non-deterministic.

What are the odds of falsification of the IPCC's multimodel diagram? Almost none, I would say. It has covered all its bases. A deterministic prediction system (i.e., one which encapsulates our supposedly 100% understanding of the climate) would give predictions that have extremely tight error bands, and yet be right. Not take up half the graph and say "yeah, we did say so".

Please do note, I am not saying anything new here. The IAC essentially said the same thing to the IPCC. I can quote the relevant passages if needed.

Apr 19, 2012 at 12:38 PM | Unregistered CommenterShub

Hey, I was going to do the 50% coin toss as part of my 'you could do it better by saying today is going to be like yesterday' theme. I must fall back on a question which may have been answered before: how are the predictions benchmarked? Short-term, that is. Does the Met Office do it, or an independent body?

Second question, in regard to the CO2, we are told the day's CO2 level is in the short-term model. Presumably as an input variable. What formula is used to translate that into heat? By which I mean, are we summing forcings? In watts presumably. Do we map the Trenberth diagram, i.e. each unit of area has an effective insolation, back radiation, evaporative and convection elements of heat transfer? And positive feedback, is that a parameter, or a result? Or am I on the wrong track completely? The question would be: are those presumptive figures ever checked against measurements?

Apr 19, 2012 at 2:22 PM | Unregistered CommenterRhoda

Richard, Tamsin,

I have looked at Hawkins' "seminal paper" that you referenced; he concludes: "...our analysis suggests, that for decadal timescales and regional spatial scales (~2000 km), model uncertainty is of greater importance than internal variability". This seems to directly contradict Tamsin's statement:

"the effects of El Nino and La Nina etc on annual temperatures are so big they can alter the trend of 10-15 years so you have to look at multi-decadal trends to be sure"

I am very, very confused.

I also note that Hawkins' paper opens with the words "Predictions of regional climate change for the next few decades are characterized by high uncertainty, but this uncertainty is potentially reducible through investment in climate science".

This, to me, sounds like the budget request from the medieval civil service Department of Alchemy: "Give us lots of money and we will turn base metals into gold".

I am sorry to use sarcasm, but I do so to make a very important point. No offence is intended.

Apr 19, 2012 at 3:09 PM | Unregistered CommenterRoger Longstaff

The first thing I noted about Hawkins' paper was its claim that regional "quantitative predictions" are "now available", citing a 2007 reference. I hope any policy-makers reading this blog for guidance will take note. They may later get shown a 'predictive' graph that shows how good the model was all the way up to 2012, but starting well before 2007.

They should stop and ask when, precisely, all predictions were made. And what, precisely, they predicted. With uncertainties and error bars. And then what subsequent data has shown. Precisely. With uncertainties and error bars.

Lastly, the difficult question: Might one have done just as well by guessing, or taking the approach that yesterday's data/trend is the best guide to today's data/trend?

Repeat ad nauseam. It is not cynical to do so. It is science.

Apr 19, 2012 at 4:22 PM | Unregistered Commentermichael hart

Roger - sorry for the confusion, I was talking about distinguishing trends in observations of the past 10-15 years.

T

Apr 19, 2012 at 5:13 PM | Unregistered CommenterTamsin Edwards

Roger, it looks to me that Figure 3 in the Hawkins paper is the one you want. Your quote relates to regional trends, not global trends. Figure 3 partitions the uncertainty for global trends. The model uncertainty and internal variability are about equal only when you get to about 12-13 years.

Translating that to real observations, one might therefore conclude that the warming trend observed over the last 15 years *could* be a result of internal variability (and sampling error I suppose) and therefore is not "statistically significant". Note that lack of statistical significance of warming does not mean that warming has not happened.

In terms of your sarcastic response to the opening sentence, in what area of research do the researchers believe that new breakthroughs will come about without any investment? There, I can do sarcasm too ;) It would seem that the section entitled "THE POTENTIAL TO NARROW UNCERTAINTY" is where the justification for the opening argument is made.

Apr 19, 2012 at 5:30 PM | Unregistered Commentermiles

Thank you miles,

I do not understand your statement "the warming trend observed over the last 15 years *could* be a result of internal variability (and sampling error I suppose) and therefore is not 'statistically significant'". My understanding was that the empirical data showed NO statistically significant warming over the last 15 years, regardless of whatever internal variability there may or may not have been. Is this correct?

Secondly, I think that I begin to understand your thesis - that internal variability may mask any trend data for a period of about 13 - 15 years in a numerical, time step simulation, however, thereafter trend data will be observable in the decades to follow - is that correct?

Finally (if the above is correct and if I understand RB's earlier points), is it the case that in simulation results for times following the first 15 years the models only try to identify trend data in terms of GHG concentrations, regardless of all of the other (dozens of) variables?

There is obviously a flaw in my understanding, because such an analysis defies logic.

I really would like to see the logic flow diagram for such a model. In fact, why not publish the code and documentation (it must exist) and gain the expertise of the online community - and free QA?

Apr 19, 2012 at 11:33 PM | Unregistered CommenterRoger Longstaff

miles, I should have added after my final comment - "after all, we all paid for it".

Apr 19, 2012 at 11:55 PM | Unregistered CommenterRoger Longstaff

Roger, good luck with that line, they are trying to sell the product, they aren't likely to post the code. A logic flow wouldn't be a bad thing to see though. As for crowd-sourced QA, I wouldn't wish it on my worst enemy. Well, maybe I would.

Apr 20, 2012 at 8:39 AM | Unregistered CommenterRhoda

Rhoda, you may well be right, however, an open audit of GCM methodology would surely be a good thing.

I have tried to summarise my thoughts, given the discussions so far. All comments would be welcome.

GCM Hypothesis – Numerical models can provide valuable information on the future of the Earth's climate in the decades to come, particularly in relation to anthropogenic GHG emissions.

The scientific method requires us to attempt to falsify this hypothesis. Here is my attempt, separated into temporal segments, bearing in mind that it has been stated that the same models are used over all timescales:

1. Numerical models are claimed to have increased the accuracy of weather forecasts over a timescale of several days, however, it is just as likely that any improvement in recent decades is the result of modern data collection methods (satellites, weather radar, etc.).
2. Numerical models have been shown to be inaccurate over timescales of a few months. Evidence – the "barbecue summer" argument.
3. Numerical models have been shown to be inaccurate over decadal timeframes. Evidence: empirical data showing a lack of statistically significant warming over the last 10 – 15 years, contradicting model predictions made 10 – 15 years ago (reference – WUWT database). (The counter argument is that “internal variability” (El Nino, La Nina, volcanoes, etc.) mask trend data over a 10- 15 year timeframe).
4. It is claimed that, following a period of internal variability, numerical models will begin to produce valuable trend data, which can be used to analyse the effect of GHG emissions on the climate. There is no evidence to support this claim; indeed it is mathematically counter-intuitive, as drift and errors (in input data and in incorrect or incomplete algorithms) have a cumulative effect in a numerical, time-step integration, thereby progressively deviating from reality.
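Point 4 can be illustrated with a toy Python sketch (a deliberately unstable system integrated with the Euler method, chosen for illustration only and not a claim about how any actual GCM behaves): a tiny perturbation to the initial condition grows with every time step rather than averaging out.

```python
# Toy illustration of cumulative error in a time-stepped integration:
# explicit Euler steps of the unstable system dx/dt = x.
def euler(x0, dt=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x += x * dt  # one explicit Euler step
    return x

a = euler(1.0)
b = euler(1.0 + 1e-9)  # identical run with a one-part-in-a-billion perturbation
# The perturbation has grown by a factor of 1.01**1000, roughly 2e4.
print(abs(b - a))
```

Whether the climate system amplifies initial-condition errors in this way over multi-decadal scales is of course exactly the point in dispute; the sketch only shows the mechanism Roger is describing.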

Apr 20, 2012 at 9:01 AM | Unregistered CommenterRoger Longstaff

I do not understand your statement "the warming trend observed over the last 15 years *could* be a result of internal variability (and sampling error I suppose) and therefore is not 'statistically significant'". My understanding was that the empirical data showed NO statistically significant warming over the last 15 years, regardless of whatever internal variability there may or may not have been. Is this correct?

Roger,

If you plot 1996 to 2011 temperature data on woodfortrees you get between 0.1C per decade warming and 0.2C per decade warming depending on whether you pick HadCRUT3 or GISSTemp.

But if you look at the plot, you will see there are rather a lot of ups and downs from variability. Looking at the surface temperature data in isolation from anything else, it is not beyond the bounds of statistical likelihood that the apparent trend is purely an artefact of that variability, with the downs happening at the beginning of the period and the ups happening at the end.

So that is the only basis for stating that the trend is not statistically significant.

(In the context of the longer-term trend (which is statistically significant), the shorter-term trend is arguably more compelling because it is (roughly) a continuation of the longer-term trend. Additionally, when one looks at other (semi-independent) metrics such as ocean heat content, one might take the view that it is less likely that the trend is due purely to internal variability.)
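The significance test being described can be sketched in a few lines of Python (the numbers below are illustrative stand-ins, not the actual HadCRUT3 or GISTemp series): fit an ordinary least squares trend to 15 annual values and ask whether zero lies within roughly two standard errors of the slope.

```python
import random

random.seed(1)

# Synthetic stand-in for 15 years of annual temperature anomalies:
# an assumed 0.15 C/decade trend plus ~0.1 C of year-to-year noise.
years = list(range(15))
temps = [0.015 * t + random.gauss(0, 0.1) for t in years]

# Ordinary least squares slope and its standard error.
n = len(years)
xbar = sum(years) / n
ybar = sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx
resid = [y - (ybar + slope * (x - xbar)) for x, y in zip(years, temps)]
se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5

# "Not statistically significant" in the sense above means zero lies
# inside the interval slope +/- 2*se.
print(slope, se, abs(slope) < 2 * se)
```

With only 15 points and noise of this size, the standard error on the slope is large relative to the slope itself, which is the heart of miles's point: a real underlying trend can fail a significance test over a short window.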

I'll provide a link to the woodfortrees plot in a following post as I don't know whether I am allowed links.

Apr 20, 2012 at 10:16 AM | Unregistered Commentermiles

GISStemp 1996-2011 - hopefully

Apr 20, 2012 at 10:18 AM | Unregistered Commentermiles

www.woodfortrees.org/plot/gistemp-dts/from:1996/to:2011/plot/gistemp-dts/from:1996/to:2011/trend

Apr 20, 2012 at 10:19 AM | Unregistered Commentermiles

Thank you miles.

Apr 20, 2012 at 10:19 AM | Unregistered CommenterRoger Longstaff

Hi Richard (Betts)

I know you posted earlier that climate models in the 1970s predicted 0.2 C of warming which then came to pass.

Here is a look at the IPCC first report (FAR) predictions, as represented in the AR4, compared to actual temperatures today. You can ignore the numbers; the graph is just an overlay.

Predictions of the IPCC FAR versus HadCRUT3

Apr 22, 2012 at 1:09 PM | Unregistered CommenterShub

Shub, you appear to have picked the satellite temperature rather than the surface temperature, which does show warming much closer to 0.2C per decade since the late 1970s. Satellite temps are measuring different things.

Apr 22, 2012 at 10:07 PM | Unregistered Commentermiles
