Discussion > Evidence, confidence and uncertainties

Thanks Philip. I don't read CA as much as BH so missed that. Not sure why Preminger wasn't used then, maybe it came too late (it takes a long time to set up GCM runs) but I'll ask about that. Your question is perfectly reasonable!

Cheers

Richard

Dec 3, 2011 at 1:30 PM | Unregistered CommenterRichard Betts

Thanks Richard, I appreciate that, you've definitely set my mind at rest. Even more heartening, perhaps, is RPJ's delicate little tease from yesterday. Don't know how many people saw that one coming, I certainly didn't! In all honesty, I've found the latest batch of emails quite disturbing, and they have certainly made me feel a lot less trusting. The story regarding Chris de Freitas is particularly hard to stomach, especially because of the involvement of Mike Hulme. I think there is a lot of work to do to mend those particular bridges.

Dec 3, 2011 at 2:49 PM | Unregistered CommenterPhilip

Chris de Freitas eh?

Why did Hans von Storch resign? Remind me.

Dec 3, 2011 at 4:23 PM | Unregistered CommenterBBD

Richard

You ask: "Why are you so concerned about the 66% confidence anyway? That's a long way from being certain. The Met Office issued the infamous Barbecue Summer forecast on the strength of 60% odds, and we all know what happened then :-)"

"Is your concern not so much with the conclusion but the way it gets propagated onwards with the confidence estimate lost?"

It is partly that. It is also to do with the way that the IPCC overeggs. For example, the claims about the volume of peer-reviewed material published in its reports are untrue: around one third of the references in the 2007 report were not peer reviewed. Up above I posted an extract from Stephen McIntyre's review of Jones 2009, in which Osborn and Briffa 2006 were cited. McIntyre's criticisms as a reviewer were rejected because he was not published. Osborn and Briffa 2006 is worthy of criticism: it contains Mann's PC1, and two of the ten series in the MWP are bristlecones/foxtails. In your response to me you quoted a list of studies in support that are of little if any value. Among the authors are a number of people whom, I would say with over 90% confidence, most of the people reading this post would not trust as scientists.

It is also about trust. I quote McIntyre again: "For the public, non-disclosure of adverse data, like the trick, seems like misconduct, but Pielke Junior, for example, has observed that there is little point in trying to fit non-disclosure of adverse data into academic misconduct, because the practice is widespread in the academic community - not just climate science. Academics seem unoffended by the trick.

"But there's a price for not being offended, because the public expects more. If climate scientists are unoffended by the failure to disclose adverse data, unoffended by the trick and not committed to the principles of full, true plain disclosure, the public will react, as it has, by placing less reliance on pronouncements from the entire field - thus diminishing the coin of scientists who were never involved as well as those who were."

I think that is an accurate analysis of public reaction, and it is my response. My distrust extends to the Royal Society for its role in the useless inquiries and Nurse in Horizon; to the government for its inability or unwillingness to look more closely at CRU and the Met Office; and to the BBC for the ignorance, activism and bias it displays. I do not trust any paleo temperature reconstruction that used bristlecones as data, as these are well known not to be reliable proxies. I question the honesty of those who do use them, and they are widely used in paleoclimatology.

With hindsight it can be seen that the Met Office forecast of a "barbecue summer" was, as they say, a binary call. The science being unhelpful, one would still have a 50% chance of a correct forecast.

I could rewrite the IPCC conclusion of 2007 we have been discussing: "Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations." This might be written: "50.1% of the observed increase in global average temperatures is due to the observed increase in anthropogenic greenhouse gas concentrations. We think this is 90% likely." This does not distort the original conclusion. It is a binary play. The statement is very close to saying that man is not responsible for the mid-20th-century global warming. The IPCC conclusion does not sit easily with the claim of 90% confidence. If the science is, as you say, robust and sound, it should not produce a conclusion that allows one properly to conclude that it is 50/50 whether man is causing recent warming. Of course, one is not sure which definition of "most" applies, if there is more than one. If there is more than one definition, it serves only to make the meaning of the conclusion, if any, less clear.
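To put a number on how loosely that wording constrains things, here is a rough sketch of my own (assuming a normal distribution for the anthropogenic fraction, which is my assumption, not the IPCC's):

```python
# Read "most ... very likely" as P(anthropogenic fraction > 0.5) = 0.9.
# Under an assumed normal distribution for the fraction, very different
# central estimates are consistent with that single statement.
from statistics import NormalDist

def prob_most(mean, sd):
    """P(fraction > 0.5) for an assumed Normal(mean, sd) fraction."""
    return 1 - NormalDist(mean, sd).cdf(0.5)

# Two very different pictures, both giving ~90% confidence in "most":
print(round(prob_most(0.6, 0.078), 2))   # modest central estimate, ~0.9
print(round(prob_most(1.0, 0.39), 2))    # large central estimate, also ~0.9
```

The point of the sketch is only that the one-sentence conclusion pins down neither the central estimate nor the spread, which is the ambiguity I am complaining about.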

Dec 3, 2011 at 5:22 PM | Unregistered Commentersam

BBD

I have not gone back to your post to check, but I think you may have suggested that the BEST report said we were still warming. The BEST report only looked at land data, and Muller made it clear that this meant nothing could be said about global warming until SSTs were examined. He did venture that there was a "pause"; I assume he meant in the land data.

Dec 3, 2011 at 5:26 PM | Unregistered Commentersam

Why did Hans von Storch resign? Remind me.
Which reason would you like, BBD. The one he gave or the real one?
Don't be so bloody naive, man.
And why not go and find out just what The Team did do about Soon and Baliunas and have your eyes opened.

Dec 3, 2011 at 6:35 PM | Unregistered CommenterMike Jackson

Hans von Storch resigned of his own free will on grounds of principle, because "as newly appointed Editor-in-Chief [he] wanted to make public that the publication of the Soon & Baliunas article was an error, and that the review process at Climate Research would be changed in order to avoid similar failures". The emails, on the other hand, describe how their authors, irritated over the publication, planned to pressurize von Storch over the issue: "I’ll be seeing Hans von Storch next week and I’ll be telling him in person what a disservice he’s doing to the science and the status of Climate Research. I’ve already told Hans I want nothing more to do with the journal." They further describe quite clearly how the email authors attempted to have de Freitas removed not only as an editor of the journal, but also from his university post. All this, despite de Freitas having, according to the journal's editor, performed a "good and correct job as editor".

Dec 3, 2011 at 6:56 PM | Unregistered CommenterPhilip

Hi Richard,
Let me formulate my statement once more, in the IPCC's own words:

[1] We have a hypothesized understanding of how climate subcomponents work and interact with each other. The global temperature anomaly is purportedly one composite indicator of how these subcomponents all 'add up'. One can imagine several others as well, but the GAT is useful for our discussion. We aim to represent such understanding in computer-run mathematical models. To this end, the IPCC says (emphasis in all passages mine):

A climate model is a very complex system, with many components. The model must of course be tested at the system level, that is, by running the full model and comparing the results with observations. Such tests can reveal problems, but their source is often hidden by the model’s complexity. For this reason, it is also important to test the model at the component level, that is, by isolating particular components and testing them independent of the complete model.

How do we come to rely on any model?

Climate models are mathematical representations of the climate system, expressed as computer codes and run on powerful computers. One source of confidence in models comes from the fact that model fundamentals are based on established physical laws, such as conservation of mass, energy and momentum, along with a wealth of observations.

A second source of confidence comes from the ability of models to simulate important aspects of the current climate.

[2] While broadly true, the above IPCC passage is slightly mistaken. We accept models as being based on 'established physical laws' by virtue of the fact that they successfully simulate important aspects of weather and climate. The two aspects are interlinked and are, in fact, not independent of each other.

For instance, if you built a model that incorporates your understanding of the physics and interactions between the different climate system subcomponents, but it produced composite variables (like continent-scale precipitation trends and global temperature anomalies) that are completely unlike the observed variables, would you accept your understanding to be true?

[3] Given the above, how does the IPCC conclude that the models it evaluates have passed the above two tests?

Models can also simulate many observed aspects of climate change over the instrumental record. One example is that the global temperature trend over the past century (shown in Figure 1) ...

[4] Thus, a model is considered 'successful' at the point it emulates a well-resolved global temperature trend. But from [2] above, it is evident that even such a 'success' does not validate the physics of the subcomponents of the system. Rather it can only be considered a sign for preliminarily accepting it as a whole. Moreover, climate models are not pure abstractions of physical interactions based on first principles alone, they have to be 'tuned' to match reality. The IPCC itself says:

Computational constraints restrict the resolution that is possible in the discretized equations, and some representation of the large-scale impacts of unresolved processes is required (the parametrization problem).

[5] Thus, from [3] and [4] above, model outputs are accepted as valid only when they emulate current temperatures, and even this step cannot serve for subcomponent validation.

Now consider your proposition, namely that model runs are unable to replicate present temperatures without the anthropogenic factors. Such reasoning cannot be considered acceptable, because these runs are not valid independently of their originating runs, i.e., the runs which emulate the GAT.

Conclusion: It is not possible to query the originating, non-dependent run by employing a dependent run of the same model, for the same variable used for validation of the non-dependent run (i.e., GAT).

The only acceptable test of climate model subcomponent function is therefore a true prediction (which either comes true or is falsified), or a geoengineering experiment in which the climate system perturbation is fully controlled and known beforehand.

Dec 3, 2011 at 8:14 PM | Unregistered CommenterShub

Correction: Last sentence of my comment at Dec 3, 2011 at 6:56 PM should have mentioned the publisher!

"All this, despite de Freitas having, according to the director of the journal's publisher, performed a 'good and correct job as editor'."

Dec 4, 2011 at 6:58 AM | Unregistered CommenterPhilip

Possibly Richard prefers not to comment on the emails? Nonetheless, I remain very interested to understand why Mike Hulme let himself get involved with the de Freitas case.

Dec 4, 2011 at 7:53 AM | Unregistered CommenterPhilip

Mike Jackson

Which reason would you like, BBD. The one he gave or the real one?

Oh come on. HvS is a fearsome character. Nobody tells him what to say, think or do. Get over to the Klimazwiebel and find out for yourself. Seriously.

Here's what he said in a WSJ opinion piece in 2009:

And what of the alarmists' kin, the skeptics? They say these words show that everything was a hoax—not just the historical temperature results in question, but also the warming documented by different groups using thermometer data. They conclude I must have been forced out of my position as chief editor of the journal Climate Research back in 2003 for my allegiance to science over politics. In fact, I left this post on my own, with no outside pressure, because of insufficient quality control on a bad paper—a skeptic's paper, at that.

End of story. Chris de Freitas and S & B (2003) are the problems here. Not the victims.

Dec 4, 2011 at 3:55 PM | Unregistered CommenterBBD

BBD,
It would be better if you talked only about things you knew about.

Dec 4, 2011 at 4:01 PM | Unregistered CommenterShub

So say something substantive and amaze us all.

Dec 4, 2011 at 4:12 PM | Unregistered CommenterBBD

Hans von Storch:

They conclude I must have been forced out of my position as chief editor of the journal Climate Research back in 2003 for my allegiance to science over politics. In fact, I left this post on my own, with no outside pressure, because of insufficient quality control on a bad paper—a skeptic's paper, at that.

Plain and simple.

Dec 4, 2011 at 4:21 PM | Unregistered CommenterBBD

BBD

Have you read what actually went on with Soon and Baliunas 2003? And with the pressure on Chris? Also can you just please re-read von Storch's words: "a skeptic's paper, at that." You can almost hear the sneer in his words.

There's science and there's politics. The vilification of S&B's findings and of de Freitas, as well as the way von Storch wants to represent his actions, are politics.

Dec 5, 2011 at 1:37 AM | Unregistered CommenterGixxerboy

Might I request that posters try harder to stay on topic? Thanks

Dec 5, 2011 at 8:38 AM | Unregistered Commentersam

Sam,

Originally my fault I think - apologies.

More on topic, I hope, is the reference to Lovejoy and Schertzer @ Dec 2, 2011 at 11:40 AM, which didn't receive any feedback at the time. It has quite a bit to say about the relationship between 20th-century temperatures and millennial temperatures, which I think is relevant to the question of uncertainties in attribution. As far as I can tell, it hasn't received much attention in the blogs, and I can see no good reason for this. Can I therefore point to this research again, especially their comment that "the instrumental and reconstruction discrepancy in fig. 9 thus remains unexplained", and ask if anyone here has any ideas or thoughts about it?

Dec 5, 2011 at 10:19 AM | Unregistered CommenterPhilip

@ sam

With hindsight it can be seen that the Met Office forecast of a "barbecue summer" was, as they say, a binary call. The science being unhelpful, one would still have a 50% chance of a correct forecast.

Indeed, and this trick is widely used among bank analysts.

Periodically these guys will publish something which typically says they see a gold price in 2012 of $3,000 or something. Or at least that's the headline. When you look at what they have actually said, it is more usually along the lines of "we see a 30% chance of a gold price of $3,000".

This means that whatever the outcome, prescience can be claimed by the simple expedient of either citing or forgetting about the all-important "30% chance" qualifier. If the price does hit $3,000 they can say, See? We said that would happen. And if it doesn't they can say See? We said that probably (70%) wouldn't happen.

A few years ago I did an analysis of the correlation between the oil price of the day, the oil price forecasts made by banks and the actual oil price outcome in the period for which they were forecasting.

The result was a 0% correlation between price forecast and price outcome (i.e. bank price forecasts have nil predictive value), but an 80% correlation to the price at the time the forecast was made.

What this suggests is that forecasts of the oil price are heavily influenced by the price of today, and that the price of today has no predictive value at all in respect of the price in a couple of years' time.
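For anyone who wants to check the method, the exercise is just a pair of Pearson correlations. The numbers below are invented for illustration only; they are not my original dataset:

```python
# Correlate forecasts against outcomes, and against the spot price at
# the time the forecast was made.  All figures are made-up examples.
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

spot_at_forecast = [60, 75, 90, 110, 95, 70, 80, 100]  # price when forecast made
forecast =         [65, 78, 95, 115, 98, 72, 85, 105]  # bank forecast for +2 yrs
outcome =          [90, 70, 60, 95, 115, 100, 65, 85]  # actual price +2 yrs later

print(f"forecast vs spot:    {pearson(forecast, spot_at_forecast):+.2f}")
print(f"forecast vs outcome: {pearson(forecast, outcome):+.2f}")
```

With illustrative numbers chosen to mimic the pattern I described, the forecast tracks the spot price almost perfectly while being nearly uncorrelated with the outcome.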

I surmise exactly the same is true in every social science, whether this be sociology, economics or climate science. It is often noted that H G Wells' idea of the future looks very much like an atrophied, invention-free pastiche of his own Edwardian era. It's also often been commented that nothing dates an era so well as its depiction of the future.

In the same way, projections of the future of anything will in 100 years' time be unerringly wrong and will smack of the prejudices of 2011.

A common critique of horror movies is that the protagonists have somehow never seen a horror movie. It's equally bizarre that scaremongers have never studied the history of scaremongering.

Dec 5, 2011 at 11:11 AM | Unregistered CommenterJustice4Rinka

Sam

You don't herd sheep by sticking to the path

Dec 5, 2011 at 11:30 AM | Unregistered CommenterGixxerboy

projections of the future of anything will in 100 years' time be unerringly wrong and will smack of the prejudices of 2011.
J4R — That is such an obvious statement in the light of everything we know that it shouldn't need saying. It's a shame that it does.
Evidence from our own experience and our parents' and our grandparents' shows that forecasting is a futile occupation, because there are so many variables, including ones that we cannot take into account because we don't even know of their existence (Rumsfeld's 'unknown unknowns').
What we do know is that humanity is endlessly inventive and though we might not always come up with the best solution we almost always come up with one that works well enough.
There is no reason at all to assume that our children and grandchildren (and I am lucky enough to have discovered that I am to become a grandfather so I now have a dog in this fight) will not be as inventive as we are and our parents were.
And as I have said before they will not thank us for trying to second guess them now with solutions to the problems they will face then.

Dec 5, 2011 at 12:03 PM | Unregistered CommenterMike Jackson

BBD
And exactly what was the matter with Soon & Baliunas — except that The Team didn't like it?
Remember, it was a meta-study, not original research. They could hardly be blamed for the conclusions of others, and carrying out the study they did and publishing the results would have been perfectly acceptable in any other field of science.
They produced research evidence (which you can dispute if you wish; that's your privilege as it was Mann's) that cast doubt on the Hockey Stick, the poster-child for CAGW.
Read the emails: the reaction was so far over the top that de Freitas and von Storch, not to mention Soon and Baliunas themselves, must have wondered what hit them. It was the reaction of a clique incensed that anyone in the climate science community would dare cast such doubt on what they considered to be the foundation stone of their belief system.
If it became known that the Hockey Stick might be flawed then the whole edifice might (probably would) come crashing down. And the reaction, had they but the brains to realise it, was such that it only added to the scepticism. And of course now we know that at least in part it was driven by the knowledge — within Mann's own circle — that the Hockey Stick was indeed deeply flawed.
von Storch resigned because he simply wasn't brave enough to take the heat.
There was never anything intrinsically wrong with the paper as a paper, whatever the actual science contained therein.

Dec 5, 2011 at 1:47 PM | Unregistered CommenterMike Jackson

Richard

I am moving on to question 3 which I will re-post again below with your answer.

Question 3
(1) How accurate is the measured temperature record
(a) on land?
(b) at the sea surface?
(c) by satellite?
"Accurate" means "showing a negligible or permissible deviation from a standard".

Accuracy varies according to where you are and how far back you are looking. In terms of the recent differences relative to the baseline (1961-1990 average), the annual global average air temperature over land is given to just under ±0.2 degrees C (95% confidence), and the sea surface temperature to about ±0.1 degrees C.

For recent decades, the difference in global average temperature for each year compared to the baseline (1961-1990) is given to within about 0.1 degree Celsius.
See papers by Brohan et al., 2006, J. Geophys. Res., 111, D12106, doi:10.1029/2005JD006548, and Rayner et al., 2003, for further information.

Accuracy tends to decrease further back in time due to sparser data and difficulties with re-calibration on changes of instrument, siting of weather stations, etc.

(I'm not sure of the numbers regarding satellite data at the moment – I’ll try to get back to you on that.)

I looked at the Brohan paper you cited. Some of the material is over my head. There are comments I can make about it which I hope you, Richard, and readers will find interesting.


Donald Rumsfeld is cited in the references. I quote briefly from a short climateaudit piece on this interesting citation.

"Brohan et al, of which Tett is a coauthor, used the prominent statistician, Donald Rumsfeld, as an authority for their uncertainty model. Brohan et al:

A definitive assessment of uncertainties is impossible, because it is always possible that some unknown error has contaminated the data, and no quantitative allowance can be made for such unknowns. There are, however, several known limitations in the data, and estimates of the likely effects of these limitations can be made [Rumsfeld, 2004].

Rumsfeld 2004 is that speech given by Rumsfeld mentioning "known unknowns" and "unknown unknowns". What is the relevance to Brohan, I wonder?

I also found a criticism of Brohan by Pat Frank posted on "the Air Vent" under the title "What evidence for unprecedented warming?". I will post below a short extract from the piece. There is a criticism of Frank in turn at "The Blackboard". These are over my head but may be of interest to you and others.

"The discussion at CA led me to read Brohan, 2006, where I noticed that they had described measurement noise as strictly random and didn’t mention systematic error at all. That seemed doubly peculiar, and that led to the analysis I’m presenting here.

Reading the air temperature literature, it became clear that this double peculiarity typified the approach to error right back through the 1980’s and before.

What I found was that Folland et al, had made a guesstimate back in 2001 [2] that the average measurement error was (+/-)0.2 C. This (+/-)0.2 C was applied by B06 and treated as random and uncorrelated among surface stations. So, following the statistics of random errors, B06 decremented the (+/-)0.2 C as 1/(sqrtN), where N = the number of temperature measurements, and as N got large the error rapidly went to zero. And that was the whole B06 ball of wax for measurement error.

To make the long story short, assessment of error methodology showed that guessing an average error is an explicit admission that you have no real physical knowledge of it. Random error is “stationary,” meaning it is defined as having a constant average magnitude and a mean (average) of zero. When one has to make a guesstimate, one doesn’t really know the magnitude, and doesn’t really know whether the error is stationary.

In short, if one doesn’t know the error is random, then applying the statistics of random error is a mistake."
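Frank's distinction between random and systematic error is easy to demonstrate numerically. The sketch below is my own illustration of the 1/sqrt(N) point, not code from Brohan 2006 or from Frank:

```python
# Averaging N measurements shrinks purely random error as 1/sqrt(N),
# but a systematic bias shared by every reading does not average away.
import math
import random

random.seed(0)
TRUE_TEMP = 15.0
SIGMA = 0.2   # the Folland guesstimate, here treated as random noise
BIAS = 0.2    # an assumed systematic offset shared by every station

def mean_error(n, bias):
    """Error of the N-reading average relative to the true temperature."""
    readings = [TRUE_TEMP + bias + random.gauss(0, SIGMA) for _ in range(n)]
    return sum(readings) / n - TRUE_TEMP

for n in (10, 1000, 100000):
    print(n, round(mean_error(n, 0.0), 4), round(mean_error(n, BIAS), 4))
# The random-only error shrinks toward zero as N grows, while the
# biased error converges to the bias itself (~0.2), however large N is.
```

This is precisely why treating a guessed ±0.2 C as random and uncorrelated makes the stated uncertainty collapse with N, whether or not the underlying error actually behaves that way.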

The CA reference by Frank is to a single article, part of a series by McIntyre at climateaudit (CA). This is a fascinating series of posts and it is well worth reading the comments section as well. I hope you and others here will read the post.

Let me start with a list of some of the aspects of measuring sea surface temperatures using buckets that I think may affect the temperature recorded.

(a) the type of bucket used - wooden, canvas, steel
(b) whether the bucket was insulated or uninsulated
(c) at what depth in the sea did the bucket draw water
(d) at what depth in the bucket was the thermometer placed
(e) at what time of day was the temperature measurement made
(f) how long did the water stay in the bucket before the temperature was taken
(g) was it sunny/overcast
(h) at which side of the ship was the measurement taken.
(i) whether the measurement was taken at all. In some sea conditions I guess that if you threw a bucket into the sea, with the boat at some speed, and held on to the rope, you would follow the bucket.

Richard, are there records that show anything about the type of bucket in use for the measurements and any changes that occurred in the type of bucket used?

McIntyre had long suspected that Folland and Parker 1995, cited in Brohan 2006, was simply implausible in claiming that in World War II there had been a general shift away from the use of buckets to using inlet water in bilges to measure SSTs and that this switch was permanent. Folland claimed "there was a sudden but undocumented change".

The publication of Kent 2007 spurred McIntyre to write his pieces, because Kent attempted to identify the distribution of measurement methods. On the basis of Kent, McIntyre concluded that there had not been an abrupt transition to the use of inlet water, as Folland had supposed with little supporting evidence, but that the use of buckets was widespread and persisted into the early 1970s.

The adjustment made by Folland of 0.3C had to be unwound over time. There was also another transition that had to be taken into account: the switch, over time after the end of WW II, from uninsulated buckets to insulated buckets.

According to McIntyre's sensitivity calculations, making such amendments to the measured record would mean increased temperatures in the (cooler than now) 50s and 60s, with much less of a trend after the 70s. One consequence was that modellers would have to look again at their models. McIntyre's position was supported by the publication of Thompson 2008. Then in 2011 came a new version of HadSST, HadSST3, which attempted to iron out and unwind some of Folland's arbitrariness while adopting its own quirky decisions. For example: "The new HadSST3 dataset still contains some seemingly arbitrary assumptions. They assert that 30% of the ships shown in existing metadata as measuring SST by buckets actually used engine inlet and proceed to reallocate the measurements on this assumption:

It is likely that many ships that are listed as using buckets actually used the ERI method (see end of Section 3.2). To correct the uncertainty arising from this, 30±10% of bucket observations were reassigned as ERI observations. For example a grid box with 100% bucket observations was reassigned to have, say, 70% bucket and 30% ERI.

The supposedly supporting argument at the end of Section 3.2 is as follows:

It is probable that some observations recorded as being from buckets were made by the ERI method. The Norwegian contribution to WMO Tech note 2 (Amot [1954]) states that the ERI method was preferred owing to the dangers involved in deploying a bucket. This is consistent with the first issue of WMO Pub 47 (1955), in which 80% of Norwegian ships were using ERI measurements. US Weather Bureau instructions (Bureau [1938]) state that the "condenser-intake method is the simpler and shorter means of obtaining the water temperature" and that some observers took ERI measurements "if the severity of the weather [was] such as to exclude the possibility of making a bucket observation". The only quantitative reference to the practice is in the 1956 UK Handbook of Meteorological Instruments (HMSO [1956]), which states that ships that travel faster than 15 knots should use the ERI method in preference to the bucket method for safety reasons. Approximately 30% of ships travelled at this speed between 1940 and 1970.

This adjustment would reduce the difference between HadSST2 and HadSST3, though the size of the impact was not reported in Kennedy et al 2011. I think that it is reasonable to hope for more conclusive documentary support for overwriting actual data, particularly given that the changes described in Kennedy et al 2011 arise from unwinding previous adjustments that were themselves made without documentary support.

Another somewhat quirky methodology of Kennedy et al 2011 is reported as follows:

Some observations could not be associated with a measurement method. These were randomly assigned to be either bucket or ERI measurements. The relative fractions were derived from a randomly-generated AR(1) time series as above but with range 0 to 1 and applied globally.

I have no idea at present why one would do things this way or what its effect is. It seems like an odd methodology."
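As best I can tell from that description, the assignment procedure looks something like the following sketch. Kennedy et al do not give their AR(1) coefficient or noise scale in the passage quoted, so every parameter here is invented:

```python
# Generate an AR(1) series, rescale it to the range 0..1, and use the
# result as the time-varying probability that an unknown-method
# observation is a bucket measurement (otherwise ERI).
import random

random.seed(42)

def ar1_fraction(n, phi=0.9):
    """AR(1) series x[t] = phi*x[t-1] + noise, rescaled to [0, 1]."""
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0, 1))
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]

frac_bucket = ar1_fraction(120)   # e.g. one fraction per month
methods = ["bucket" if random.random() < p else "ERI" for p in frac_bucket]
print(methods[:5])
```

Even as a sketch, it shows what bothers me: the split between bucket and ERI for the unknown observations is being drawn from an arbitrary stochastic process rather than from any documentary evidence.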

Here are some more extracts from the McIntyre series to put flesh on my bare bones of the story. I start with the opening paragraphs from "The Team and Pearl Harbor"

"One of the Team’s more adventurous assumptions in creating temperature histories is that there was an abrupt and universal change in SST measurement methods away from buckets to engine inlets in 1941, coinciding with the U.S. entry into World War II. As a result, Folland et al introduced an abrupt adjustment of 0.3 deg C to all SST measurements prior to 1941 (with the amount of the adjustment attenuated in the 19th century because of a hypothesized use of wooden rather than canvas buckets.) At the time, James Hansen characterized these various adjustments as “ad hoc” and of “dubious validity” although his caveats seem to have been forgotten and the Folland adjustments have pretty much swept the field. To my knowledge, no climate scientist actually bothered trying to determine whether there was documentary evidence of this abrupt and sudden change in measurement methods. The assumption was simply asserted enough times and it came into general use.

This hypothesis has always seemed ludicrous to me ever since I became aware of it. As a result, I was very interested in the empirical study of the distribution of measurement methods illustrated in my post yesterday, showing that about 90% of SST measurements in 1970 for which the measurement method was known were still taken by buckets, despite the assumption by the Team that all measurements after 1941 were taken by engine inlet."

And this from "Lost at sea..."

"A CA reader emailed me, observing that there may be relevant differences in insulated and uninsulated buckets in the post-World War 2 period, which could easily affect adjustment schedules. This makes a lot of sense to me and might reconcile a few puzzles and open others.

Let’s say that the delta between engine inlet temperatures and uninsulated buckets is ~0.3 deg C (and here we’re just momentarily adopting one of the canonical Folland numbers as this particular number surely deserves to be cross-examined). Insulated buckets would presumably be intermediate. Kent and Kaplan 2006 suggest a number of 0.12-0.18 deg C. So for a first rough approximation to check our bearings on this – let’s suppose that it’s halfway in between. Maybe it’s closer to engine inlets, maybe it’s closer to uninsulated buckets. We’re not trying to express viewpoints on such conundrums here – we’re merely examining what assumptions are latent in the temperature estimates.

We know that 90% of all measurements in 1970 with (supposedly) known provenance were done by buckets (Kent et al 2007), while there was a turnover in proportion to about 90% engine inlet and hull sensor by the 2000s. In my first cut at estimating the effect of unwinding some of the erroneous adjustment assumptions, I posited that the above information implied that the 0.3 deg C adjustment between buckets and engine inlets didn’t disappear merely because of reversion to “business as usual” after WW2. On this information, the only time that the delta could be introduced was between 1970 and 2000. This in turn poses new conundrums, as you’re getting into periods with satellite measurements. So there are issues with pushing the delta entirely into the post-1970 period.

However, let’s suppose that there was a transition from predominantly uninsulated buckets immediately post-WW2 to predominantly insulated buckets as at 1970 or so. Then the 0.3 deg C total adjustment would be spread proportionally between the two periods – with the delta between uninsulated buckets and insulated buckets being allocated to the 1945-1970 period or so (together with other relevant instrumental changes) while the delta between insulated buckets and engine inlets would be allocated to the 1970-2005 period (again together with any other relevant instrumental drifts e.g. changing proportion of hull sensors, buoys, whatever.)"
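The arithmetic of that allocation is simple enough to set out explicitly. The halfway split for insulated buckets is the working assumption in the post, not a measured number:

```python
# Splitting the canonical 0.3 C bucket-to-inlet delta across the two
# hypothesized transitions.  All values are working assumptions from
# the post, not measured quantities.
DELTA_TOTAL = 0.30       # uninsulated bucket vs engine inlet (canonical)
DELTA_INSULATED = 0.15   # assumed: insulated bucket sits halfway between

adj_1945_1970 = DELTA_TOTAL - DELTA_INSULATED  # uninsulated -> insulated
adj_1970_2005 = DELTA_INSULATED                # insulated -> engine inlet

# The two period adjustments must recombine to the full delta.
assert abs(adj_1945_1970 + adj_1970_2005 - DELTA_TOTAL) < 1e-12
print(adj_1945_1970, adj_1970_2005)
```

On the Kent and Kaplan 2006 figure of 0.12-0.18 C instead of the halfway guess, the same two lines would simply allocate slightly different shares to each period.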

Below I put the McIntyre posts on this subject in date order. They are all worth reading. Three of the more important ones, in my opinion, are The Team and Pearl Harbor, Lost at Sea: the Search Party and HadSST3. In my view these posts say a lot about the thoroughness of McIntyre's research, his clarity of writing and the helpful, knowledgeable audience he attracts (like BH in this respect). McIntyre posts how he does his work, he accepts correction gladly and cheerfully, and his research is diligent and thorough. On this topic it is the IPCC research that is sloppy and shoddy.

What the series of McIntyre's posts shows, convincingly in my view, is that there is really little reliable data about the nature of SST measurement. IPCC members have piled adjustment on adjustment with little evidence adduced in support of the changes. As a result there is, I believe, much uncertainty about SSTs.

The IPCC 2007 statement quoted in question 1 relies on paleo reconstructions, the measured temperature record and computer models. None seems reliable.

Some McIntyre posts on SSTs

Changing adjustments to 19th century SST June 19 2005

SST adjustment # 2 June 24 2005

Buckets and Engines March 17 2007

The Team and Pearl Harbor March 18 2007

Rasmus the Chevalier and Bucket Adjustments December 23 2007

Nature "discovers" another Climate Audit Finding May 28 2008

Lost at sea: the search party May 31 2008

Indiana Jones and the Hollerith Punchcard June 6 2008

HadSST3 July 12 2011

Dec 5, 2011 at 3:37 PM | Unregistered Commentersam

sam

Might I request that posters try harder to stay on topic? Thanks

Might I request that:

- you reduce the length of your comments by ~ 75%

- you reproduce ~ 75% less material from CA (which some of us do read)

- you only post on topics you understand

Thanks

Dec 5, 2011 at 8:01 PM | Unregistered CommenterBBD

Dec 5, 2011 at 3:37 PM | sam

Thanks for a well thought through and clearly presented post.

This thread is getting quite interesting and informative.

Shame to see you've been trolled by the Village Idiot again!

Dec 6, 2011 at 6:30 AM | Unregistered CommenterRKS

RKS

My interpretation of your post is that you're aiming that comment at BBD. If so, I think it is uncalled for. BBD might be many things but he is certainly no idiot. Nor could he be construed as a troll. His faith in certain estimates of the energy budget and RF might seem stubborn but try engaging him with some decent argument and civility would you, please?

I expressed surprise at some of BBD's outbursts a while ago, which he put down to personal attacks. He may have a point.

(If my interpretation is wrong, please accept my apologies.)

Dec 6, 2011 at 9:08 AM | Unregistered CommenterGixxerboy