Discussion > Where is Rhoda's Evidence? (plagiarised by Dung)

Martin

Ironically I'm a good example of the differing ways to develop software. One recent project I worked on involved flight displays and was Level A. Full traceability, using DOORS, configuration management for everything.

But I have also, in my own time, developed a WordPress theme. It doesn't have the same level of configuration management beyond version control, user help and some information about its functions. It's written this way because someone can hack at it if they want.

So the use of the software dictates the level of oversight. Climate models are first and foremost scientific tools with no direct use. By the power of money and politics they have been turned into very useful tools to drive policy. But the level of software control is still at an academic level only.

That's a problem.

Nov 18, 2015 at 10:44 AM | Registered CommenterMicky H Corbett

That's a problem.

I'm not sure that the Met Office code quality itself is a problem at all.

There is a serious problem - but it is at a far higher level. The fundamental problem is the Met Office's delusion that it is possible to simulate the climate system and to use the simulation reliably to predict future climate.

If the Met Office's programs were used solely to improve their understanding of how the climate system works, purely as a research tool, then it would not matter to anyone outside their bubble what the quality of their code was.

Of course, if their model had the potential of providing reliable information about future climate, and if that information were used to form national policy and contribute to international policy, then it would matter a great deal whether the software was doing what it was supposed to be doing.

However, the realistic view is that a million line climate model, incapable of being validated, incorporating countless 'judgements' and suffering many other problems, is nothing more than a million line w@nk. The divergence between its output and what the climate has actually been doing has now been providing confirmation of this for some years.

The Met Office's software quality problem is at a much higher level than the details of their code. If you are trying to program something that is beyond human ability, the output is always going to be useless, and it is then irrelevant how crappy the code itself is.

Nov 18, 2015 at 12:33 PM | Registered CommenterMartin A

Micky H Corbett, Martin A

I have enormous respect for software engineering as a discipline, where applications such as jet engine control systems are concerned.

To my ignorant eye they are jewels, bugless and covering every conceivable contingency. They are as near perfect as you can get them, of necessity. They apply to systems whose properties are stable over time. A jet engine operates within known parameters which have been measured and optimised during development. You are dealing with a machine whose properties and behaviour are known and predictable. The underlying physics is deterministic.

I imagine the type of mind which produces such software as a perfectionist, working hard to eliminate all uncertainties and cover every possible contingency. The resulting software should be fault free and need no further modification.


Now consider a climate or weather forecasting model. It is ramshackle by comparison.

It deals with many more variables, some of which are known in advance, some of which are sampled from the environment and others which have to be inferred.

Each run begins from different starting conditions.

Much of the system is chaotic, sensitive to small differences in starting conditions and therefore inherently difficult to project. You have to make multiple runs and choose those which look most likely.

Finally, the science is developing. Understanding of existing parameters and processes changes with time, and new parameters become sufficiently understood to include. The code is constantly being rewritten to accommodate the changes.

Climate and weather software will inevitably be ramshackle by your standards and constantly changing because both the climate system and our understanding of it are constantly changing.

I meant no insult, but I suspect that the type of software engineer who produces perfect engineering control software would find the Hadley Centre's approach intolerable. I suspect you would hate it and I would love it.

Nov 18, 2015 at 6:15 PM | Unregistered CommenterEntropic man

I suspect you would hate it and I would love it.
Nov 18, 2015 at 6:15 PM Entropic man

Yes, you may well be right on both counts there, EM. Although the reasons I'd probably hate it might not be the same as what you imagine.

Nov 18, 2015 at 6:43 PM | Unregistered CommenterMartin A

Now consider ... I would love it.
I have never read a more cogent exposition of why weather prediction for more than a week ahead, or climate predictions or even projections for any period at all, should be dismissed out of hand.
Thank you, EM.
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
IPCC Third Assessment Report, WG1, Executive Summary.

Going home time!

Nov 18, 2015 at 9:26 PM | Registered CommenterMike Jackson

EM

What you are talking about is development versus verification stages. If climate models are always changing and never reach verification, then why are they used as verified tools, with all the consequences that come with that?

I've been involved in all stages of development on programs so I know the joy of creation and the boredom of testing.

But then I also know what it means and what it takes to develop software that is used to directly affect peoples' lives.

You can't compromise on that, though the Met Office apparently thinks it can. And it appears it believes itself above the ethics of that. I don't think it's intentional, just inexperience with engineering fields.

Nov 19, 2015 at 7:32 AM | Registered CommenterMicky H Corbett

Entropic, your posts usually make some sense, but your explanation of why standard software engineering principles cannot be applied to the whimsical, chaotic nature of climate modelling is the biggest load of bunk I have ever read.

You are implying that standard well-understood linear systems require a sort of watchmaker drone mentality, with its verification stages and calibration and testing, but that the models of climate require the artisan, freethinking approach which cannot be constrained by the rules of good design, and that this is why you can't apply standard engineering principles to them. You then compound this artless fib with the speculation that you would prefer that sort of 'creative coding' approach, and that Martin the robotic-voiced software engineer would hate it, because presumably he's unable to think outside the box, unlike the creative minds of climate such as yourself who would excel at the light touch of such an art.

As a self-compliment, it's a classic; as a description of the problem space and the software engineering approaches you would use to tackle it, it's pure bunk. Climate modelling is hard, but not because it is a complex problem - designing a jet fighter or a city road scheme is far more complex in terms of moving parts. Climate is hard to do because of the problems of resolution, initial conditions, and multiplying modelling dimensions. Initial conditions aside, it's a capacity problem, not a complexity problem. We don't have the computing power to model even the understood components (down to molecular size) over three spatial dimensions and a time dimension, over the multiple scenarios we require to do analysis.

This means that approximations are made - 'fudge factors' as we call them in the industry. Constants or stub functions approximate to the real function (perhaps measured independently in a laboratory) over a suitable range. This is where it becomes guesswork, because the error introduced by these essential simplifications is unknown; multiplied over multiple dimensions, a badly chosen one can render the entire model run useless.
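To make that concrete, here is a toy sketch (entirely my own invention, with made-up names and numbers, not taken from any real climate model): a sub-grid process is replaced by a constant tuned over a calibration range, and the error that introduces is never measured, so it never appears in the quoted uncertainty.

```python
# Toy sketch of a 'fudge factor' / parameterisation. Names and numbers are
# invented for illustration; this is not how any particular model does it.

def true_process(temp_anomaly):
    """Stand-in for the real physical process, which in practice is unknown
    or too expensive to resolve explicitly."""
    return 0.30 * temp_anomaly + 0.05 * temp_anomaly ** 2

def parameterised_process(temp_anomaly, tuning_constant=0.35):
    """The stub the model actually runs: a constant tuned so the model
    matches observations over the calibration range (roughly 0 to 1 here)."""
    return tuning_constant * temp_anomaly

# Inside the calibration range the two agree closely...
for t in (0.2, 0.5, 1.0):
    print(t, true_process(t), parameterised_process(t))

# ...outside it the approximation error grows, and because that error was
# never measured it is effectively treated as zero in the output uncertainty.
for t in (3.0, 5.0):
    print(t, true_process(t), parameterised_process(t))
```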

What you are describing as some sort of art that only certain special people can do is actually the management of introducing error into the model, anathema to any engineer, who always tries to reduce error. This is why Martin said he'd hate it, but not for the reasons you think. The technique you are implying is actually the degradation of the model (which is already an unrealistic abstraction of reality) by the application of guesses. In engineering (which is applied science) it is an admission that the models don't work and you're having to guess instead.

Nov 19, 2015 at 9:35 AM | Unregistered CommenterTheBigYinJames

BYIJ - Haha ..... Yes, EM is always (well from time to time, at least) telling me how I am incapable of thinking outside precisely defined situations in which there is no ambiguity.

This means that approximations are made - 'fudge factors' as we call them in the industry. (...) This is where it becomes guesswork

Or 'parameterisations' as climate scientists prefer to call them. 'Fiddle factor' is also a commonly used term but perhaps not in climate science.

(...) The models have to be tuned empirically by adjusting the parameterization schemes, because both the models and the parameterizations are only approximations of the real physical processes. Such tuning means that improvements that are more scientifically justifiable sometimes decrease the model’s skill. The scientists sometimes talk of the models getting a good match with observations for the wrong reasons.(...) [from the Met Office software paper]

"The scientists sometimes talk of the models getting a good match with observations for the wrong reasons"

Testing on the training data again. A perfect fit to past observations but not a representation of the physical reality.

The Met Office climate model reminds me of the Macaroni Nimrod radar project of the 1980s - hundreds of people earnestly beavering away writing code etc etc to implement a doomed system that nobody understood in its entirety. And whose failure led to many deaths.

Nov 19, 2015 at 11:12 AM | Registered CommenterMartin A

Martin A, Micky H Corbett, TheBigYinJames

I certainly touched a nerve here.

Micky H Corbett

You distinguish between the development and verification stages of software development. The Met Office weather forecasting software can be verified by use, but the verification of climate models can only be done in hindsight.

I am happy to recognise that CMIP5 and its descendant are in development, and therefore unacceptable as policy tools by engineering standards. The problem is that, in this case, policy has to be made now. Decisions regarding CO2 production and climate mitigation made now will have long term consequences. A big role of climate models is to project the consequences of different policy options.

It would be nice to wait until the 50 year projections have been compared with reality, but by then the consequences of current policy decisions would be locked in. In such circumstances you take the best information you can get, and an imperfect model is better than no model at all.

TheBigYinJames

You discuss two different aspects of the problem Micky H Corbett described.

The first is capability. To describe the Earth's climate in detail requires enormous computing power. IIRC CMIP5 used a 50 km grid. That is 204,000 grid squares, plus the surface and several layers of atmosphere in each square. Call it 1 million blocks. A run sets starting conditions at time t. The model uses the physics to calculate energy flow in and out of each block and updates each block's state to time t+1. The cycle repeats for as long as required.
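In outline (a heavily simplified sketch of my own, with one scalar value per block and placeholder physics, nothing like the real CMIP5 code) the loop looks like this:

```python
import numpy as np

# Heavily simplified sketch of the time-stepping loop described above.
# One scalar state per block; real models carry many variables per cell
# and far more elaborate physics.

n_lat, n_lon, n_levels = 180, 360, 30              # illustrative grid only
state = np.full((n_lat, n_lon, n_levels), 288.0)   # e.g. temperature in K

def net_energy_flow(state):
    """Placeholder physics: net energy flow in/out of each block.
    In a real model this is where the dynamics and parameterisations live."""
    return 0.01 * (state.mean() - state)

n_steps = 100
for step in range(n_steps):
    # advance every block from time t to t+1
    state = state + net_energy_flow(state)
```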

As processing power improves the grid size can be decreased and the time intervals shortened. The model remains approximate, but its resolution is improved. If you regard the threshold at which the model becomes useful as beyond current technology, may I ask when you expect to reach the threshold?

Climate is a complex, chaotic system which is not deterministic at our level of knowledge. Any projections come with confidence limits. You seem uncomfortable with that. How do software engineers handle uncertainty in other fields?
Modelling turbulent airflow over an airframe comes to mind as a possible example.

Martin A

Past climate data is used to tune the climate models. Past climate data is then used to verify their performance. This is a problem for climate modellers, because we only have data for one planet. Verification of future projections by comparison with the actual outcome would require a crystal ball.

Has this difficulty arisen elsewhere? How was the verification problem solved?

You are clearly unhappy with verification of climate models. Can you suggest ways in which you might verify 50 year projections by a climate model without waiting 50 years? To use an engineering analogy, how would you verify that the calculated fatigue life of a turbine blade, perhaps 20,000 flying hours, is correct? Is there any way short of running the engine for 20,000 hours and looking for blade cracks?

Nov 19, 2015 at 1:14 PM | Unregistered CommenterEntropic man

EM

Climate is a complex, chaotic system which is not deterministic at our level of knowledge. Any projections come with confidence limits. You seem uncomfortable with that. How do software engineers handle uncertainty in other fields?

I am not uncomfortable with confidence limits, because I understand how confidence limits are calculated. They are not fingers in the air; they are also outputs of the calculation, and are determined by both the measurement precision of the inputs and the nature of the calculation steps, which are all known. Errors compound over each dimension of the calculation, including time.

In numerical analysis with autocorrelated model states, the errors compound quickly. This is why weather forecasts go out of whack after only a small amount of model time. If some of the calculation steps use unmeasured errors, i.e. fudge factors, then the confidence limits placed on the outputs are not calculated, but only estimated (and very probably not included in the confidence limit calculation, since the errors are unknown and thus assumed to be zero).
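A crude numerical illustration of that compounding (my own toy, with invented numbers): a per-step error that looks negligible grows steadily when the state at each step feeds the next.

```python
import numpy as np

# Toy illustration of error compounding in an iterated (autocorrelated) model.
# The numbers are invented purely to show the shape of the problem.

rng = np.random.default_rng(0)
n_steps, n_runs = 500, 1000
per_step_sigma = 0.01           # tiny, 'negligible' uncertainty per step

states = np.zeros(n_runs)
spread = []
for step in range(n_steps):
    # each step adds a deterministic increment plus an imperfectly known term
    states = states + 0.1 + rng.normal(0.0, per_step_sigma, size=n_runs)
    spread.append(states.std())

# The spread across runs keeps growing (roughly as sqrt(n_steps) here, and
# faster if errors are correlated or the dynamics amplify them). Any error
# source left out of this calculation is, in effect, assumed to be zero.
print(spread[9], spread[99], spread[499])
```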

I am absolutely NOT comfortable with that.

Please do not conflate uncertainty in engineering, which is a well-understood area, with guesswork in inputs and calculation parameters within climate models.

Nov 19, 2015 at 1:42 PM | Unregistered CommenterTheBigYinJames

EM

I am happy to recognise that CMIP5 and its descendant are in development, and therefore unacceptable as policy tools by engineering standards. The problem is that, in this case, policy has to be made now. Decisions regarding CO2 production and climate mitigation made now will have long term consequences. A big role of climate models is to project the consequences of different policy options.

The first part in bold is exactly the point. You don't change people's lives on hypotheticals. You then appear to contradict yourself with an exclamation of faith.

CO2 and its supposed effects on the climate only come from modelling. The basic physics does not cover it, as the IPCC themselves even state as shown by The Big Yin. So the models themselves are not sufficient or to a high enough standard to be reliable.

Therefore if you still believe CO2 is a threat it is only because you believe it to be so rather than doing your best to demonstrate so.

Like I have always said, the science may in fact be right to a degree, but not following the process to make it safe to use (within reason) makes it dangerous. Predicting long term effects solely on guesses and models is criminal in some industries like the one I work in. Luckily there are certification boards and independent reviewers for that.

Nov 19, 2015 at 5:56 PM | Registered CommenterMicky H Corbett

TheBigYinJames

"Please do not conflate uncertainty in engineering, which is a well-understood area, with guesswork in inputs and calculation parameters within climate models."

The key words would appear to be "well understood area". You are normally working with what used to be science, until it became so well understood that the doubts have evaporated and the uncertainties minimised.

Amber Rudd comes to you and asks you to write a climate model as a third, independent check on the Hadley Centre and GISS. She needs it as an aid for long term infrastructure planning.

What would you tell her?

Nov 19, 2015 at 5:59 PM | Unregistered CommenterEntropic man

Micky H Corbett

". Predicting long term effects solely on guesses and models is criminal in some industries like the one I work in. Luckily there are certification boards and independent reviewers for that."

Like TheBigYinJames you have the luxury of working with mature science, completely understood. How does the system cope with an unexpected emergency, when you are forced to operate outside that comfort zone, having to rely on guesses and models?

Once again, an engineering example. The 2010 Eyjafjallajökull eruption shut down most air travel over Western Europe while engineers scrambled to understand the risks. That was a minor eruption. How would your systems have coped if it had been on the scale of Tambora?

You are starting to make me nervous. Are you telling me that the software underlying an increasingly algorithm driven civilisation cannot cope with the unexpected?

Nov 19, 2015 at 6:29 PM | Unregistered CommenterEntropic man

EM:

You ask what one would do if Amber Rudd came and asked for a climate model as a third, independent check on the Hadley Centre and GISS, needed as an aid for long term infrastructure planning.

I would tell her not to waste her money. If I were a climate scientist, I would ask her how much taxpayers' money she was prepared to pay me.

Incidentally the absurd hysteria over the Eyjafjallajökull eruption was another example of model failure which caused great unnecessary losses to airlines. Airline pilots have always flown around volcanic eruptions. To be fair, the eruption happened when there was a lot of pollen in the air in the UK which misled a lot of people!

Nov 19, 2015 at 6:59 PM | Unregistered CommenterMike Post

This graph was just published by Sou at Hotwhopper. The full post is here.

The graph compares Hadcrut4 with the mean of all the CMIP5 model runs. This does not look like an unsuccessful model.

Nov 19, 2015 at 7:28 PM | Unregistered CommenterEntropic man

EM

Forgive me, but was not the first model output for CMIP5 supposed to be available for analysis in February 2011? Hindcasting is relatively easy. The future is not so.

Nov 19, 2015 at 8:29 PM | Unregistered CommenterMike Post

EM

Like TheBigYinJames you have the luxury of working with mature science, completely understood. How does the system cope with an unexpected emergency, when you are forced to operate outside that comfort zone, having to rely on guesses and models?

Once again, an engineering example. The 2010 Eyjafjallajökull eruption shut down most air travel over Western Europe while engineers scrambled to understand the risks. That was a minor eruption. How would your systems have coped if it had been on the scale of Tambora?

You are starting to make me nervous. Are you telling me that the software underlying an increasingly algorithm driven civilisation cannot cope with the unexpected?

Dust in an engine can be tested, and has been tested. Measuring its distribution in the air requires adequate sensors; otherwise the tolerance applied is quite high. That tolerance comes down with tests, which can be done if the knowledge is considered cost effective. Otherwise you live with the delay.

The effect of CO2 with regard to causing dangerous heating is unknown. The cost of decarbonising the economy has some serious repercussions that can actually be predicted quite well over the next decade at least. The prudent thing to do would be to make sure all those components of your models match reality to a degree considered reasonable enough before starting to take drastic action. Otherwise you end up throwing money at shadows.

If there was a large scale eruption encompassing the globe, do you think air travel would stay permanently grounded without someone checking to see if dust concentrations were actually that bad? During Eyjafjallajökull, Ryanair got very vocal about the overcautious approach taken. Test planes flew to see what the density of dust was like. And the great thing is that data will be used the next time it happens.

But what if they had no evidence, just a model based on extrapolated assumptions from a lab engine? Do you think people would say, yep, that's good, we don't have to do anything else?

Climate models are trying to predict the entire atmospheric process and boil it down to CO2 as the main factor.

How you cope with the unexpected is how you cope with the expected. You don't suddenly start relying on Santa Claus.

Nov 19, 2015 at 8:36 PM | Unregistered CommenterMicky H Corbett

Micky

It was not just Ryanair! The whole scare was based on idiotic models egged on by the intervention of government.

Nov 19, 2015 at 8:51 PM | Unregistered CommenterMike Post

EM - congratulations on keeping so many balls in the air (or is it plates spinning on spindles?) at once.
Also, for asking what seem to be genuine questions.

...an imperfect model is better than no model at all.
Nov 19, 2015 at 1:14 PM | Unregistered CommenterEntropic man

Several times that has been stated here by people with similar beliefs to yours. But if you have no way of knowing whether your model has any predictive ability at all (as with climate models), then the only honest thing is to say "Currently, we do not know".

Yet, the Met Office is happy to say stuff like:

Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.

They are deluded or lying.

Being able to reproduce a very small bit of past history is not a test of ability to predict future behaviour. It provides no more than a basic sanity check.

Past climate data is used to tune the climate models. Past climate data is then used to verify their performance.

Yes. So you are testing the model's ability to reproduce the data it was trained on, not that it is a correct representation of the physical reality. And even if it were known to be a correct representation of the physical reality, that would not confirm its ability to predict future behaviour (for example because of error accumulation or chaotic effects).
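The point in code form (a deliberately silly toy with invented data, not a claim about how any real model is tuned): a flexible model fitted to one stretch of record reproduces that stretch almost by construction, and tells you little about the stretch it never saw.

```python
import numpy as np

# Toy illustration of "testing on the training data". The 'record' below is
# invented noise around a small trend - no real data involved.

rng = np.random.default_rng(1)
years = np.arange(1900, 2016)
record = 0.005 * (years - 1900) + rng.normal(0.0, 0.1, size=years.size)

train = years < 1990                       # the period used for tuning
x = (years - 1950.0) / 60.0                # rescale for a well-behaved fit

# Tune a deliberately over-flexible polynomial on the training period only.
coeffs = np.polyfit(x[train], record[train], deg=6)
fitted = np.polyval(coeffs, x)

def rms(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("error on the training period:", rms(fitted[train], record[train]))
print("error on the unseen period  :", rms(fitted[~train], record[~train]))
# A good score on the first line says nothing much about the second.
```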


This is a problem for climate modellers, because we only have data for one planet. Verification of future projections by comparison with the actual outcome would require a crystal ball.

Has this difficulty arisen elsewhere? How was the verification problem solved?

You are clearly unhappy with verification of climate models. Can you suggest ways in which you might verify 50 year projections by a climate model without waiting 50 years?

No I can't. And even after 50 years, you will only have *one* 50-year test case, so even if they match well, that could be just a fluke.

To use an engineering analogy, how would you verify that the calculated fatigue life of a turbine blade, perhaps 20,000 flying hours, is correct? Is there any way short of running the engine for 20,000 hours and looking for blade cracks?

EM - Beats me. You'd have to ask someone who actually knew about such things. (Is blade fatigue really the problem rather than creep or degradation of metallic crystal structure? But I get the point)

I'd imagine it is not only the hours that count but the number of take-offs. (I was always impressed, sitting cattle-class at the back of a 747 for night-time take-offs, by the red glow from the rear of its engines - presumably visible radiation from its red hot turbine blades.)

But blade lifetime will undoubtedly be something that is well under control. Presumably, when an engine is being certified for airworthiness, it will undergo testing through very many take-off cycles at power levels well above normal. There will be (I imagine) a known relation between the rate of blade degradation under accelerated conditions and under normal conditions, enabling lifetime to be reliably predicted. But that is all surmise - I don't actually know.

I'll refrain from writing a long essay here on computer model testing (either simulation models or mathematical models). All the same....

Validating a model should be a multi-level process: from verifying, at the top level, that the system and what it is supposed to do have been correctly understood, down to tests of the detail of its behaviour at the lowest level.

After basic sanity checks of a simulation (or mathematical model), you can normally start with very simple cases where solutions can be calculated analytically and where you can compare those solutions with simulation results, to get precise information about simulation errors in those special cases. You can simulate corner cases (very low traffic, heavily overloaded with very high traffic) and check that the simulation behaves as logic says it should under those conditions.
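To make that first kind of check concrete, here is a trivial sketch of my own (nothing to do with climate code): a numerical integrator compared against a case whose answer is known exactly.

```python
import math

# Trivial sketch of checking a simulation against an analytic solution:
# integrate dy/dt = -y numerically and compare with the exact answer exp(-t).

def simulate(y0, dt, n_steps):
    """Forward-Euler integration of dy/dt = -y (the 'simulation' under test)."""
    y = y0
    for _ in range(n_steps):
        y = y + dt * (-y)
    return y

exact = math.exp(-1.0)
for dt in (0.1, 0.01, 0.001):
    numeric = simulate(1.0, dt, int(round(1.0 / dt)))
    print(f"dt={dt}: simulated={numeric:.6f}  exact={exact:.6f}  "
          f"error={abs(numeric - exact):.2e}")
# The error shrinks in a predictable way as dt shrinks, which gives precise,
# quantitative information about the simulation's accuracy in this special case.
```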

You can test a new model against an existing validated model, if the two models are capable of simulating some common cases. And you can test the model against measurements from existing examples of the real thing, including pushing the model into regions where its accuracy can be expected to fall down, to get information on the zone within which its results can be trusted. You can omit (or switch off) bits of the model to see how this degrades its accuracy and see if this matches up with what theoretical analysis says should happen.

Is that enough of my waffle? I hope you get the general picture: if the output of a model is going to be used for serious purposes, then it needs to have been validated, so that it is known for certain under what conditions its results can be relied upon and under what conditions its results are unreliable.

Nov 19, 2015 at 8:51 PM | Unregistered CommenterMartin A

EM's entire argument seems to be that because it's hard to model climate, due to it being an immature science, we should be happy with the first amateurish stab a non-specialist makes at it. We know from history that climate scientists are particularly bad at bringing in suitable specialists to cover gaps in their knowledge, so I have zero confidence that their cobbled-together models are even doing what they intend them to do, even if they know what that is.

The fact they refuse to allow external verification only adds to that suspicion.

The golden rule of programming is that nobody ever gets it right first time; the whole art of development is like sculpture, hacking away at the discovered defects until you get as close to perfection as you can. But you keep looking for defects and optimisations for as long as you are allowed. This is the pain and the pleasure of software development, and I don't believe for a moment that atmospheric physicists on a mission have the desire or aptitude for such an activity.

I've seen clever amateurs attempt to program. They always have an overblown belief that their first cut of code will work 100%. When the defects are subtle, this can be fatal.

As for doing a model of my own, I agree with EM when he says that the climate is non-deterministic, or at least not amenable to numerical simulation. I am constantly thinking of alternative ways to model the environment (I mentioned it a couple of years ago on here), with an idea to make a model which is very like your hydraulic model in concept - rather than try to model many small units (which we can't do very well), to model larger, smarter units. I got sidetracked.

Nov 19, 2015 at 9:13 PM | Unregistered CommenterTheBigYinJames

TheBigYinJames, Martin A

"The Sceptical General Circulation Model"

That was an interesting discussion to read. Thank you. We seem to have covered at least some of the same ground again.☺

A model designed by sceptics would be an excellent idea. Between the three of you there is considerable expertise available. How about a collaboration?

Code for both the GISS E model and CMIP5 is publicly available as a starting point if necessary. Dr Spencer had a simple model running in Excel on his website.
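By way of illustration of what such a 'simple model' can look like, here is a bare-bones zero-dimensional energy balance sketch of my own. It is not Dr Spencer's spreadsheet, and the parameter values are placeholders, not claims about the real climate.

```python
import math

# Bare-bones zero-dimensional energy balance model, for illustration only.
# Parameter values are placeholders - they are NOT tuned to observations and
# carry no claim about the real climate sensitivity.

heat_capacity = 8.4e8        # J per m^2 per K (roughly a 200 m ocean mixed layer)
feedback = 1.5               # W per m^2 per K (assumed net feedback parameter)
dt = 365.25 * 24 * 3600.0    # time step of one year, in seconds

def co2_forcing(co2_ppm, co2_ref=280.0):
    """Common logarithmic CO2 forcing approximation (~3.7 W/m^2 per doubling)."""
    return 5.35 * math.log(co2_ppm / co2_ref)

temp_anomaly = 0.0
co2 = 280.0
for year in range(2000):
    co2 = min(co2 * 1.005, 560.0)                    # toy emissions scenario
    imbalance = co2_forcing(co2) - feedback * temp_anomaly
    temp_anomaly += imbalance * dt / heat_capacity

# With these made-up numbers the model settles near forcing/feedback,
# i.e. about 2.5 K for a doubling - a consequence of the assumptions, not a result.
print(round(temp_anomaly, 2), "K")
```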

An honest model which produced the observed record with a weak CO2 warming would put a sceptic cat among the warmist pigeons. I would be fascinated and a lot less warmist.

There is one danger, that the sceptic model ends up agreeing with all the others about the effect of CO2.

The BEST temperature record started as an independent check on the algorithms used to generate global land temperature averages from the raw data. The group was chosen from outside the climate science community, partly financed by sceptics and included two sceptic scientists. There were high hopes that it would falsify GISS, etc.

When the results were published, the BEST record agreed with GISS, HadCRUT and the Japanese. Those who financed it rejected it, one of the sceptic scientists became a warmist and the other left in disgust.

Nov 19, 2015 at 11:57 PM | Unregistered CommenterEntropic man

I don't think it would worry me if the model agreed with other models, since it would be based on the best available scientific theories, which all point to warming (and indeed we have seen some warming). I'm a lukewarmer, I expect a bit of warming, and I'd be surprised if any model based on best-available physics didn't show a bit.

The real interest of such a model would be in the more rapid introduction of new entities, such as 'clouds', 'black carbon' etc, to see what that does to the model output.

Nov 20, 2015 at 9:30 AM | Unregistered CommenterTheBigYinJames

TheBigYinJames

"Where the real interest of such a model would be would be the more rapid introduction of new entities, such as 'clouds', 'black carbon' etc and see what that does to the model output."

Then you don't really need a global model. Something more regional would suffice. That allows better spatial and temporal resolution. You might get away from the grid completely. Just look at the effect of varying cloud or black carbon on a single grid square with starting conditions set for the latitude and existing climate of interest.

For cloud that would be a tropical or temperate latitude. For black carbon it would be in the Arctic.

Nov 20, 2015 at 12:25 PM | Unregistered CommenterEntropic man

TheBigYinJames

I just encountered FLUXNET.

It looks like a useful dataset if you want real world data to compare with simulations of local conditions.

Nov 20, 2015 at 1:09 PM | Unregistered CommenterEntropic man

You really are determined that we are heading for a climate catastrophe, aren’t you, EM? Despite the fact that not one climate model has been able to give any truly verifiable prediction, you insist:

A big role of climate models is to project the consequences of different policy options.
Your belief in the predictive power of these utterly worthless bundles of computer code was well-encapsulated in page 3 of this thread:
Similarly the climate change to date is much less than will happen in the future, but only a fool would wait for the disaster to happen before doing anything about it. [sic] (Nov 2, 2015 at 11:22 AM)
Seriously? In spite of the simple fact that no-one has been able to predict much about the weather or the climate for more than a few days hence with any accuracy, you believe that “…climate change to date is much less than will happen in the future…”? You are not basing your arguments on empirical science here, EM, you are basing them on your religion: “…and an imperfect model is better than no model at all.” Er... no, it is not; you have that completely the wrong way round: basing serious decisions on no model at all is way, way better than basing them on an imperfect model – particularly when the models are as massively imperfect as the climate models to which you cleave so assiduously. Oh, and by the way, which of these 100+ models is the one that we should really, really, really be believing in?

Nov 21, 2015 at 10:12 AM | Registered CommenterRadical Rodent