Saturday, December 27, 2014

Schmidt and Sherwood on climate models

Over the last week or so I've been spending a bit of time with a new paper from Gavin Schmidt and Steven Sherwood. Gavin needs no introduction of course, and Sherwood is also well known to BH readers, having come to prominence when he attempted a rebuttal of the Lewis and Crok report on climate sensitivity, apparently without actually having read it.

The paper is a preprint that will eventually appear in the European Journal for Philosophy of Science and can be downloaded here. It is a contribution to an ongoing debate in philosophy of science circles as to how computer simulations fit into the normal blueprint of science, with some claiming that they are something other than a hypothesis or an experiment.

I'm not sure whether this is a particularly productive discussion as regards the climate debate. If a computer simulation is to be policy-relevant, its output must be a workable approximation to the real world, and it must be validated to show that this is the case. If climate modellers want to make the case that their virtual worlds are neither hypothesis nor experiment, or to use them to address otherwise intractable questions, as Schmidt and Sherwood note happens, then that's fine, so long as climate models remain firmly under lock and key in the ivory tower.

Unfortunately, Schmidt and Sherwood seem overconfident in GCMs:

...climate models, while imperfect, work well in many respects (that is to say, they provide useful skill over and above simpler methods for making predictions).

Following on from this, the authors examine climate model development and testing, and both sections are interesting. For example, the section on tuning models includes this:

Once put together, a climate model typically has a handful of loosely-constrained parameters that can in practice be used to calibrate a few key emergent properties of the resulting simulations. In principle there may be a large number of such parameters that could potentially be tuned if one wanted to compare a very large ensemble of simulations (e.g. Stainforth et al 2005), but this cumbersome exercise is rarely done operationally. The tuning or calibration effort seeks to minimise errors in key properties which would usually include the top-of-the-atmosphere radiative balance, mean surface temperature, and/or mean zonal wind speeds in the main atmospheric jets (Schmidt et al 2014b; Mauritsen et al 2012). In our experience however tuning parameters provide remarkably little leverage in improving overall model skill once a reasonable part of parameter space has been identified. Improvements in one field are usually accompanied by degradation in others, and the final choice of parameter involves judgments about the relative importance of different aspects of the simulations...

This tallies with what Richard Betts has said in the past, namely that modellers are using the "known unknowns" to get the model into the right climatic ballpark, but not to wiggle-match. However, I'm not sure that users of climate models can place much reliance on them when there is this clear admission that the models are nudged or fudged so that they look "reasonable".
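To make the tuning idea concrete, here is a minimal sketch of the sort of calibration loop the quoted passage describes. It is purely illustrative and not taken from the paper or from any actual GCM: the "model", the parameter name and the response curve are all invented, with a single loosely-constrained parameter tuned to minimise the error in one emergent property (a stand-in for the top-of-the-atmosphere radiative balance).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical toy stand-in for a GCM: one loosely-constrained
# parameter (imagine a cloud entrainment rate) maps to an emergent
# property such as the top-of-atmosphere (TOA) radiative imbalance.
def toy_model_toa_imbalance(entrainment_param):
    # Purely illustrative response curve, in W/m^2.
    return 3.0 * np.tanh(entrainment_param - 1.2) + 0.5

TARGET_TOA_IMBALANCE = 0.0  # aim for near-zero net radiative imbalance

def tuning_cost(entrainment_param):
    # Squared error in the key emergent property being calibrated.
    return (toy_model_toa_imbalance(entrainment_param) - TARGET_TOA_IMBALANCE) ** 2

# Search only a "reasonable part of parameter space", as the quote puts it.
result = minimize_scalar(tuning_cost, bounds=(0.5, 2.0), method="bounded")
print(f"tuned parameter: {result.x:.3f}, "
      f"residual imbalance: {toy_model_toa_imbalance(result.x):+.3f} W/m^2")
```

In a real GCM the search would be vastly more expensive, and, as the authors concede, improving one field tends to degrade others, so the "cost" being minimised is really a judgment call across many diagnostics at once.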

The section on model evaluation is also interesting:

The most important measure of model skill is of course its ability to predict previously unmeasured (or unnoticed) phenomena or connections in ways that are more accurate than some simpler heuristic. Many examples exist, from straightforward predictions (ahead of time) of the likely impact of the Pinatubo eruption (Hansen et al 1992), the skillful projection of the last three decades of warming (Hansen et al 1988; Hargreaves 2010) and correctly predicting the resolution of disagreements between different sources of observation data e.g., between ocean and land temperature reconstructions in the last glacial period (Rind and Peteet 1985), or the satellite and surface temperature records in the 1990s (Mears et al 2003; Thorne et al 2011). Against this must be balanced predictions that did not match subsequent observations—for instance the underestimate of the rate of Arctic sea ice loss in CMIP3 (Stroeve et al 2007).

I was rewatching Earth: Climate Wars the other day, and laughed at the section on the credibility of climate models, which essentially argued that because Hansen got the global response to Pinatubo correct we should believe what climate models tell us about the climate at the end of the next century. Of course, we'd be shouting it from the rooftops if Hansen's model had got it wrong, but I think some recognition is due of what a small hurdle this was.

Similarly, how much confidence should climate modellers have in Hansen's 1988 prediction? As the Hargreaves paper cited notes, Hansen's GCM overpredicted warming by some 40% when assessed over its first 20 years. That was better than a naive prediction of no warming, but it was still a long way out. Moreover, it should now be possible to redo Hargreaves' assessment at the 25-year mark, and it is more than likely that the naive prediction would now outperform the GCM.
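For what it's worth, here is a hedged sketch of how a Hargreaves-style skill comparison works: the forecast is scored against observations relative to a naive no-warming baseline. Every number below is invented for illustration. Note that with these smooth, noise-free series a 40%-too-warm forecast still beats the baseline; with real, noisy data and a longer window the outcome could differ, which is exactly why redoing the assessment would be informative.

```python
import numpy as np

# Hypothetical illustration of a skill-score comparison: a model
# forecast versus a naive "no warming" prediction, both scored
# against observations. All numbers here are invented.
years = np.arange(1988, 2014)  # a 25-year assessment window
obs_trend = 0.017              # assumed observed trend, K/yr (illustrative)
model_trend = 0.017 * 1.4      # a forecast ~40% too warm, as in Hargreaves (2010)

observed = obs_trend * (years - years[0])
model_forecast = model_trend * (years - years[0])
naive_forecast = np.zeros_like(observed)  # "no warming" baseline

def rmse(forecast, truth):
    return np.sqrt(np.mean((forecast - truth) ** 2))

# Skill relative to the naive baseline: positive means the model
# beats "no warming"; a perfect forecast scores 1.
skill = 1.0 - rmse(model_forecast, observed) / rmse(naive_forecast, observed)
print(f"model RMSE:  {rmse(model_forecast, observed):.3f} K")
print(f"naive RMSE:  {rmse(naive_forecast, observed):.3f} K")
print(f"skill score: {skill:.2f}")
```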

And what about the Arctic sea ice predictions? You have to laugh at the authors' shamelessness in picking Arctic sea ice here: look, it's worse than we thought! Nevertheless, Stroeve et al 2007 proves an interesting read, with computer model simulations presented alongside observational data going back to 1950. The early figures in this dataset were apparently based on a paper from the Met Office, a read of which reveals that they were interpolated from other data points. The paper also contains these words of caution:

Care must be taken when using HadISST1 for studies of observed climatic variability, particularly in some data sparse regions, because of the limitations of the interpolation techniques, although it has been done successfully...

Data-sparse regions like the Arctic, then?

I think I'm right in saying that there has been another paper published recently which reconstructed sea ice levels from old satellite photos and showed that the Met Office figures were too high, but I can't lay my hands on it at the moment.

So, Schmidt and Sherwood is an interesting read, but I'm not sure that the poor policymaker will draw much comfort from it.

 


Reader Comments (54)

HAS -
Thanks for the reference. I think Smith has it about right. Certainly Schmidt & Sherwood look with Nelson's eye at the models' limitations.
But it is the decadal and multi-decadal forecasts, of "not much" value, which form the majority of the "scientific" backing for subsidies and taxes, resting on the models' claim that something awful will happen soon without an immediate and comprehensive reduction in fossil fuel consumption.

Dec 29, 2014 at 5:53 AM | Registered CommenterHaroldW

HaroldW, I think the important point is how we improve our ability to manage any risks in what is going on.

That conversation can only get traction once there is a common understanding of the limitations of the tools we have at hand. I suspect once that occurs we will see less money going into PhDs and research programmes that do "what-ifs?" on the output of climate models and more effort into helping to recognise the early signs that we might be heading to events that do need some effort by way of mitigation (whether as a consequence of CO2 or natural variation).

By way of example, I live in a country with lots of coastal communities and considerable seismic activity (NZ). In practice the risks from sea level rise (including the impact of storms) due to possible temperature increases are much lower than "the big one", as we so lovingly call it. Sea level rise is slow, reasonably predictable within the lifetimes of most of the built assets, and is only likely to impact a small proportion of the community. In most cases it can and will be managed by private individuals and their insurers dealing with the problem. Vulnerable structures won't get repaired or replaced. Both risk and adaptation are local, not global, despite what you get told.

As we saw in Christchurch, the seismic risk is a much more unruly beast. In risk management terms the likelihood of a seismic event is higher based on current knowledge, and the consequences are definitely much greater.

Dec 29, 2014 at 6:42 AM | Unregistered CommenterHAS

Having worked with the safety team in a car R&D centre, I can say that complex models are used initially to verify safety-critical systems (traction control, ABS, etc.), and the people doing this are maths geniuses. But even after all that, they still take a car specially wired with a whole host of sensors to the test track and make damn sure the models come close to the real world.

They make climate-scientist-level maths look like preschool work, but even they know not to trust their model output!

Dec 29, 2014 at 10:32 AM | Unregistered Commentercomptroller

Slightly off-topic, but relevant to the "prediction business"...

I wonder just how many observers in the oil industry in, say, July 2014 came out and stated: "By the end of the year, Brent Crude will be $55/barrel"?

Thought not....

Dec 30, 2014 at 1:13 PM | Unregistered Commentersherlock1
