Monday, Mar 4, 2013

IPCC statistics ruled illegal

Bayesian statistics, the approach favoured by the IPCC in its assessments of the world's climate, has been ruled illegal by the Appeal Court in London. As the judge explained in a case revolving around possible causes of a fire:

Sometimes the "balance of probability" standard is expressed mathematically as "50 + % probability", but this can carry with it a danger of pseudo-mathematics, as the argument in this case demonstrated. When judging whether a case for believing that an event was caused in a particular way is stronger than the case for not so believing, the process is not scientific (although it may obviously include evaluation of scientific evidence) and to express the probability of some event having happened in percentage terms is illusory.

David Spiegelhalter notes that "[to] assign probabilities to events that have already occurred, but where we are ignorant of the result, forms the basis for the Bayesian view of probability". That being the case, one wonders whether this opens up the possibility of legal challenges to the IPCC assessment reports.
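As a rough illustration of the Bayesian view Spiegelhalter describes — assigning a probability to an event that has already happened — here is a minimal sketch in Python. The scenario and all the numbers are invented and have nothing to do with the actual case:

```python
# Bayesian probability of a past event: which of two candidate causes
# started a fire? Priors and likelihoods below are purely illustrative.
def posterior(prior_a, like_a, prior_b, like_b):
    """Posterior probability of cause A given the evidence,
    assuming A and B are the only possible causes."""
    num_a = prior_a * like_a
    num_b = prior_b * like_b
    return num_a / (num_a + num_b)

# Cause A is rarer a priori but explains the evidence much better.
p = posterior(prior_a=0.1, like_a=0.9, prior_b=0.9, like_b=0.2)
print(round(p, 3))  # 0.333
```

The output is a statement about our ignorance of what in fact happened, not about any randomness left in the event itself — which is exactly the distinction the judge objected to.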

For once, however, I find myself on the IPCC's side. I imagine a higher court will set the ruling aside.


Reader Comments (68)

"I imagine a higher court will set the ruling aside."
Do you have a prior for that? ;-)

Mar 4, 2013 at 3:06 PM | Registered CommenterHaroldW

Leaving aside the legal issues as to whether there will be an appeal, and whilst I have not studied the judgment in detail, there is much to commend about drawing a distinction between the probability of an event happening and that of an event having happened.

As regards the IPCC's reports the problem stems from ascribing a cause to the late 1970s/1998 warming and concluding that it was brought about by causes different to that causing the 1920 to 1940 warming.

As regards the former, the IPCC concluded that the late 1970s warming must have been caused by CO2 simply because they could not think of anything else that may account for the warming. The problem with this logic is that it is only as good as one's state of knowledge, understanding and imagination. The less knowledge, understanding and imagination that you have, the more likely you are to conclude that X is responsible simply because you cannot think of any other factor that might be responsible and the real reason why you cannot think of another possible factor is ignorance in knowledge, understanding and imagination.

You have to know absolutely everything (i.e., any and all possibilities, however remote) before you can use the Sherlock Holmes rationale of eliminating the impossible so that whatever remains, however improbable, must be the truth. If you do not, the possibility remains that you have not eliminated everything, and the improbable explanation left standing need not be the truth.

Mar 4, 2013 at 3:12 PM | Unregistered Commenterrichard verney

Derp derp don't mind me I'm just an incredibly dumb judge derp derp derp

Mar 4, 2013 at 3:30 PM | Unregistered CommenterLuis Dias

Not so much a comment but a request. Could you provide the URL for the Table on electricity generation? I do some work on this topic, and am in correspondence with my MP about wind power, but really need some more sources of the primary data. The watt.hour is particularly interesting. I would like to download data of this type.

Many thanks, Robin

Mar 4, 2013 at 3:37 PM | Unregistered CommenterRobin Edwards

@ Richard

Indeed. The Sherlock Holmes reasoning was, in hindsight, a perfectly valid reason to blame witches for poor harvests in the seventeenth century.

Something similar is certainly going on today.

Mar 4, 2013 at 4:00 PM | Unregistered CommenterJustice4Rinka

in the meanwhile, we can all grab our posteriors.

Mar 4, 2013 at 4:08 PM | Registered Commenterjferguson

Using a "balance of probability" test, if the IPCC used Methodology A to prove AGW was happening instead of Methodology B or C, I must conclude there is something wrong with Methodology A.

Mar 4, 2013 at 4:29 PM | Unregistered CommenterBruce

Electricity generation by fuel type is here

PS=Pumped Storage
NPSHYD=non-pumped-storage hydro
INTxxx=interconnectors from France/Netherlands/Ireland

Mar 4, 2013 at 4:46 PM | Unregistered CommenterJack Hughes

Bayes' Theorem is very good for correcting mistakes in one's betting behavior. It has no applications beyond that. It cannot help in making inferences about the world that is independent of the person placing the bets. It is not useful in science. It belongs to the arena of decision making under uncertainty. That is the realm of policy.

Mar 4, 2013 at 5:04 PM | Unregistered CommenterTheo Goodwin

"The less knowledge, understanding and imagination that you have, the more likely you are to conclude that X is responsible simply because you cannot think of any other factor that might be responsible and the real reason why you cannot think of another possible factor is ignorance in knowledge, understanding and imagination."

Well said, Richard Verney.

Some might be surprised by the extent of imagination that is required of an able scientist, yet it is an essential part of the job description. The scientist is required to imagine as many possible reasons as to why their theory might be wrong, and, as far as is reasonable, detail why they rejected them.

A person who enters their studies with a limited imagination and a preconception that anthropogenic CO2 is responsible for late 20th-century warming runs the risk of examining just that.

Sceptics shouldn't have to do their job for them, in public, and with no financial reward.

Mar 4, 2013 at 5:12 PM | Unregistered Commentermichael hart

"The doctors say there's a 50/50 chance of Nordberg living."

"But there's only a 10 percent chance of that."

Andrew

Mar 4, 2013 at 5:16 PM | Unregistered CommenterBad Andrew

Well having read the judgment, I think the only thing one can be certain of is that truth is the first casualty of law.

The council seem to me to have been completely incompetent in the way the rubbish tip was run. Smells like they didn't want to spend any money to keep anything working properly. But when there was an incident - then they got experts on site pretty damned fast.

Examples:

A fairly substantial three phase cable was left live coiled up against the wall - for nearly 10 years!

This cable had signs that it had arced to something in the past.

When one of the machines developed a fault it didn't just trip the first circuit breaker, it tripped a second and in fact, also blew the electricity board fuses. This should not happen.

At the time, they asked this guy on site - he was supposed to be repairing something else - probably the gates looking at the judgment.

They got him to repair one baler by cannibalising another baler for parts. At some point, they would have had to call him back in to repair the baler he'd now disabled.


I'd say there was something very wrong with the whole safety culture for this to have been like this. Still presumably the council will get a nice new building paid for by the insurance company.

Cheers,


Nick.

Mar 4, 2013 at 5:27 PM | Unregistered CommenterNickM

Although now retired as a scientist, I was taught and still believe that statistics deal only with identification of different populations of data. Statistics alone (including Bayesian) cannot demonstrate causation. The methods used to produce the populations determine whether causation can be demonstrated. This implies that observation alone cannot be used to pick putative causal agents. If observations are used to produce models (e.g., eclipses in the solar system), then future observations and statistics can be used to demonstrate causation.

Mar 4, 2013 at 6:09 PM | Unregistered CommenterMorley Sutter

I'm still confused as to what a 20% chance of rain means. If I was to live that day five times, I'd get rained on once?

Mar 4, 2013 at 6:20 PM | Unregistered CommenterJames Evans

...I'm still confused as to what a 20% chance of rain means. If I was to live that day five times, I'd get rained on once?...

Presumably your other option is that, if you go out on that day, 1/5 of your body will get wet....

Mar 4, 2013 at 6:35 PM | Unregistered CommenterDodgy Geezer

I am not sure what Bayesian statistics are, but what I have read of Climate Science leads me to think this might be an example:

"Banned in nine countries for its sexual potency, Sex Panther cologne is known to make you an irresistible object of interest to the ladies. Made with bits of real panther, studies show this formidable and pungent cologne works every time, sixty percent of the time. "

Mar 4, 2013 at 6:36 PM | Unregistered CommenterDMC

http://notalotofpeopleknowthat.wordpress.com/2013/03/04/ethiopian-droughts-linked-to-el-nino/

Mar 4, 2013 at 6:47 PM | Unregistered CommenterPaul

DMC , where can I get Sex Panther cologne? Academic interest, of course.

Mar 4, 2013 at 6:50 PM | Unregistered Commenterartwest

"Presumably your other option is that, if you go out on that day, 1/5 of your body will get wet...."

I thought it meant it would rain for 4 hours 48 minutes that day.

Mar 4, 2013 at 7:28 PM | Unregistered Commentersteveta_uk

OT
Just in case some you across the pond might be interested in attending.

Tom Nelson had a blurb today about Richard Lindzen being at the Oxford Union for a discussion this Friday evening. Supposedly free, but you might want to check on that.

https://www.oxford-union.org/term_events/al_jazeera2?SQ_CALENDAR_DATE=2013-03-08

Mar 4, 2013 at 7:38 PM | Unregistered CommenterBob Koss

Mar 4, 2013 at 6:20 PM | James Evans

I'm still confused as to what a 20% chance of rain means. If I was to live that day five times, I'd get rained on once?

Yes, that's a very good way of expressing it.

For example, if the weather forecast suggests that conditions will be those associated with isolated showers, it can be expected that some places will get a shower that day, but it will be impossible to predict exactly which places will get wet and which will stay dry, as there are essentially random factors at work determining when and where convective clouds form.

However, experience (and/or numerical models) may show that particular conditions give rise to showers occurring over (say) 20% of the area during the day. Hence any particular location has a 20% chance of a shower that day.

If very similar conditions occurred the next day, there might once again be showers over 20% of the area, but a different 20%.

Over 5 such days then individual locations should get wet once (on average). Of course it won't really work out nice and neat with everywhere getting wet exactly once - some would get wet more than once, some not at all. But essentially, yes, you are right that if the forecast is for a 20% chance of rain, if you lived that day 5 times you'd expect to get rained on once.

Mar 4, 2013 at 7:45 PM | Registered CommenterRichard Betts

@Theo 5:04:
>>Bayes' Theorem is very good for correcting mistakes in one's betting behavior. It has no applications beyond that.

Good Heavens man. Turn off your cell phone because Bayes' theorem is used to detect the bits coming in from the tower. Every. Single. Bit. Scientific applications of Bayes' theorem abound.

Don't bother trying to reply...your computer's networking chip uses Bayes' theorem so it will be futile.
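The bit-detection GDixon describes can be sketched as maximum a posteriori (MAP) detection of a binary symbol in noise. This is illustrative only — the signal levels, noise spread, and priors are invented, not those of any real chip:

```python
# Hypothetical sketch of Bayes' theorem deciding whether a noisy received
# sample was transmitted as bit 0 or bit 1 (MAP detection).
import math

def map_detect(sample, prior1=0.5, level0=-1.0, level1=1.0, sigma=0.5):
    """Return the more probable bit given a received sample,
    assuming Gaussian noise around the two signal levels."""
    def likelihood(x, mean):
        return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))
    post1 = prior1 * likelihood(sample, level1)
    post0 = (1 - prior1) * likelihood(sample, level0)
    return 1 if post1 > post0 else 0

print(map_detect(0.8))   # 1
print(map_detect(-0.3))  # 0
```

With equal priors this reduces to picking the nearer signal level; the Bayesian machinery earns its keep when the priors are unequal or the noise model is more complicated.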

Mar 4, 2013 at 7:48 PM | Unregistered CommenterGDixon

Mar 4, 2013 at 6:20 PM | James Evans

I'm still confused as to what a 20% chance of rain means. If I was to live that day five times, I'd get rained on once?

Yes, that's a very good way of expressing it.
For example, if the weather forecast suggests that conditions will be those associated with isolated showers, it can be expected that some places will get a shower that day, but it will be impossible to predict exactly which places will get wet and which will stay dry, as there are essentially random factors at work determining when and where convective clouds form.

Mar 4, 2013 at 7:45 PM Richard Betts


Richard - can you enlarge a bit on what you mean by "essentially random factors" ?

When do climate influences become "essentially random"?

.... and when they do, can we still attempt to model them?

Mar 4, 2013 at 7:53 PM | Registered CommenterFoxgoose

This is a delightful study in the philosophy of probability.

First, on the question of whether past events can properly be said to have a probability, one would be able to say rather that they had a probability, and that the probability at the time of the event was such-and-such. This is an allowable interpretation for the aleatoric probability of random events in the past. However, as people have already noted, the 'probability' being discussed here is not entirely aleatoric but is mostly epistemic - it's not a matter of the outcome being a priori undetermined, but of the outcome being unknown.

Epistemic uncertainty is represented by "Bayesian belief", which is technically not a probability, but a measure of an agent's knowledge or belief about something. There are several different ways that have been proposed for representing belief - fuzzy logic, Dempster-Shafer belief, and Bayesian belief are just a few examples. Each approach offers interpretations of the measure of belief, and mathematical methods of combining them in compound logical statements. Thus if your belief in proposition A is Bel(A) and your belief in B is Bel(B) then your belief in (A OR B) might be Bel(A OR B) = Max(Bel(A), Bel(B)). There are usually rules for AND, OR, NOT, and IMPLIES.
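The combination rules mentioned above can be put side by side in a few lines (illustrative only — real belief calculi such as Dempster-Shafer carry far more machinery than this):

```python
# Two rules for Bel(A OR B), as sketched in the text.
def fuzzy_or(bel_a, bel_b):
    """Possibility-style rule: Bel(A OR B) = max(Bel(A), Bel(B))."""
    return max(bel_a, bel_b)

def bayes_or(p_a, p_b):
    """Probability rule for independent A and B:
    P(A OR B) = P(A) + P(B) - P(A)P(B)."""
    return p_a + p_b - p_a * p_b

print(fuzzy_or(0.6, 0.3))  # 0.6
print(bayes_or(0.6, 0.3))  # 0.72
```

The point is that the same pair of component beliefs yields different compound beliefs under different calculi, which is why the choice of representation matters.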

"Bayesian belief" models belief using the same mathematical rules of combination as probability. If the beliefs in the component propositions are equal to their probabilities, then the belief in any compound proposition will be the Bayesian probability of that compound.

However, Bayesian belief is NOT a probability. It is a subjective measure of what people know, and because different people know or believe different things, the Bayesian belief of a single event can have different values for each of them. From your point of view, the 'probability' of me having an Ace is 4-in-52. From my point of view, it's equal to 1.

When we talk about the probability of objective events in the world, like quantum randomness or dice, it makes no sense for the probability to be subjective. The event has a certain definite probability determined by the physics, irrespective of what we think. (Ignoring pesky arguments about whether the laws of physics are deterministic.) However, it is also the case that we have no means to be able to directly observe what that probability is. All we have are mathematical models, and the outcome of observations.

The probability output by a model is actually a belief - a different model could output a different belief. "Knowledge" is simply a model of reality able to make predictions with some accuracy. And the outcome of experiments and observations updates our prior beliefs via the process of Bayesian updating. Although reality is not subjective, our experience of it is, and thus we only have direct mental access to beliefs, not to actual probabilities. So from this point of view, talking about probabilities as a separate thing becomes rather hypothetical, and it makes sense to blur the distinction and refer to Bayesian beliefs as probabilities for all practical purposes.
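The "Bayesian updating" described here can be made concrete with a toy example: belief in two rival models of a coin, updated as flips come in. All the numbers are invented:

```python
# One step of Bayes' rule applied repeatedly over a dict of
# model -> prior belief, given per-model likelihoods of each observation.
def update(beliefs, likelihoods):
    """Return normalised posterior beliefs after one observation."""
    posterior = {m: beliefs[m] * likelihoods[m] for m in beliefs}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

beliefs = {"fair": 0.5, "biased": 0.5}    # prior beliefs in the two models
for flip in ["H", "H", "H"]:              # three heads observed
    like = {"fair": 0.5, "biased": 0.8}   # P(heads | model)
    beliefs = update(beliefs, like)

print(round(beliefs["biased"], 3))  # 0.804
```

Each observation shifts belief towards the model that predicted it better — which is all that "updating prior beliefs" amounts to mechanically.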

Right. Having got the heavy philosophy out of the way, we can look at the other question they raise, which is what to conclude when offered a choice of two extremely unlikely options. This is actually a significant problem for forms of machine intelligence that try to use Bayesian methods for representing belief.

The problem here is that the assessment of the probabilities is based on a statistical model, an algorithm that outputs a probability when provided with a description of an event or chain of events. The judge constructs a mental model by which he estimates the likelihood of each sequence of events. If the model is accurate, then the conclusions reached are correct. If as the judge said there were only two possible causes of the fire, and one was much likelier than the other, then the 'probability' is greater than 50% that that was the cause.

However, one also has to include model uncertainty, which is the measure of our belief that our model is correct. In many circumstances this can safely be ignored, because we are reasonably confident of the model, and the probabilities output are such that small variations will have little effect. However, out in the extreme tails of the distribution where the very improbable events are found, model uncertainty can have a big effect. To put it simply, when faced with two near-impossible options, the possibility that you're making an unwarranted assumption somewhere becomes the third, most likely option.

The IPCC make this same distinction too, in the definitions for the terms "likelihood" and "confidence". "Likelihood" means the probability of an outcome assuming the model is correct. "Confidence" means the probability that the model is correct. So when they say "it is very likely that..." it doesn't actually tell you what the probability is.
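The likelihood/confidence split can be illustrated with the law of total probability. The numbers are invented, and treating the model-is-wrong case as an uninformative 50/50 guess is an assumption made purely for the sketch:

```python
# Combining a "likelihood" (probability of the outcome if the model is
# right) with a "confidence" (probability that the model is right).
def unconditional(p_outcome_given_model, p_model_correct,
                  p_outcome_otherwise=0.5):
    """P(outcome) allowing for the model itself being wrong; if wrong,
    fall back to an uninformative 50/50 guess (an assumption)."""
    return (p_outcome_given_model * p_model_correct
            + p_outcome_otherwise * (1 - p_model_correct))

# "Very likely" (90%) under the model, but only 80% confidence in the model:
print(unconditional(0.9, 0.8))  # 0.82
```

So a "very likely" statement attached to a merely "high confidence" model yields an unconditional probability noticeably lower than the headline likelihood.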

Mar 4, 2013 at 7:55 PM | Unregistered CommenterNullius in Verba

Mar 4, 2013 at 6:09 PM | Morley Sutter

This implies that observation alone cannot be used to pick putative causal agents. If observations are used to produce models (e.g., eclipses in the solar system), then future observations and statistics can be used to demonstrate causation.

Yes, I agree with this too. I often get told here that climate scientists should not use models, and use observations instead. But as you say, although observations can tell you what is happening, they cannot tell you why - unless you are observing a controlled experiment, which is not possible with the global climate as there is only one Earth! So, we use observations and experiments and existing theory (eg: fluid dynamics) to produce models which are then used to try to understand and explain the observed behaviour of the atmosphere and forecast its behaviour in the future.

Mar 4, 2013 at 8:02 PM | Registered CommenterRichard Betts

I probably ought to clarify, before somebody picks me up on it, when I say "And the outcome of experiments and observations updates our prior beliefs via the process of Bayesian updating", people usually don't actually use Bayes in their conceptualisation. Human belief is yet another method, distinct from all those listed above. Only scientists and computers use Bayes.

Mar 4, 2013 at 8:08 PM | Unregistered CommenterNullius in Verba

Hi Foxgoose

I mean things like the classic "flapping of a butterfly's wings", that sort of thing.

It's impossible to pin down every last detail, but the overall behaviour can often be approximated statistically. If there is some external effect then this helps with predictability to some extent.

Take the drunkard's walk, for example. All things being equal, any step he takes can be in any random direction. However, if he's staggering down a hill, he's more likely to step downhill than uphill because gravity is also having an influence.
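The drunkard-on-a-hill picture can be simulated in a few lines. The step probability is made up for illustration:

```python
# Biased random walk: each step is random, but a downhill bias
# (the "external effect") makes the net drift predictable.
import random

def drunkards_walk(steps, p_downhill=0.6, seed=42):
    """Net downhill displacement after `steps` random +/-1 steps."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < p_downhill else -1
    return position

# Individual steps are unpredictable, but the mean drift per step is
# 2 * p_downhill - 1 = 0.2, so ~2000 net downhill after 10,000 steps.
print(drunkards_walk(10000))
```

No single step can be predicted, yet the statistical behaviour of the whole walk can — which is the distinction being drawn between weather and climate.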

Mar 4, 2013 at 8:13 PM | Registered CommenterRichard Betts

Over 5 such days then individual locations should get wet once (on average). Of course it won't really work out nice and neat with everywhere getting wet exactly once - some would get wet more than once, some not at all. But essentially, yes, you are right that if the forecast is for a 20% chance of rain, if you lived that day 5 times you'd expect to get rained on once.

The odds of getting rained on from 0 to 5 times are as follows:

0 times 32.768%
1 time 40.96%
2 times 20.48%
3 times 5.12%
4 times 0.64%
5 times 0.032%

I have way too much time on my hands.
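TerryS's figures are the binomial distribution for 5 days with a 20% daily chance of rain, which is quick to check:

```python
# Binomial probabilities for k wet days out of 5, with p = 0.2 per day.
from math import comb

def rain_probs(days=5, p=0.2):
    return {k: comb(days, k) * p**k * (1 - p)**(days - k)
            for k in range(days + 1)}

for k, prob in rain_probs().items():
    print(f"{k} times: {prob:.3%}")
# prints 0 times: 32.768%, 1 times: 40.960%, ... matching the table above.
# The expected number of wet days is days * p = 1, as Richard said.
```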

Mar 4, 2013 at 8:24 PM | Unregistered CommenterTerryS

Richard,

"I often get told here that climate scientists should not use models..."

I agree that it is wrong to say climate scientists shouldn't use models. It's unavoidable.

I think the distinction being made is between models used for developing understanding and models being used for prediction, and the rule they're referring to ought to be that you shouldn't use unvalidated models for prediction.

The terms "verification" and "validation" come from quality control. "Verification" means confirming that a process conforms to its specification. For a model you need to document what accuracy the output has and over what range of circumstances it is known to have it - i.e. assumptions and approximations used, the range of inputs over which it has been tested, etc. "Validation" is confirming that the specification is sufficient for the purpose to which it is being used. So a validated model means selecting a model suitable for the application, showing that the prerequisites are satisfied and the output is sufficiently accurate to meet the user's requirement.

The issue is that climate models, interesting and useful as they are, have not been shown to be sufficiently accurate at predicting climate a century hence to meet the stringent requirements people feel should be needed for steering the global economy and spending trillions of dollars of Other People's Money.

The accuracy needed for writing another academic paper in an obscure climate journal that no politician will ever see is somewhat less, and I think most people would be reasonably happy that they're more than sufficient for that. In fact, for that purpose, I doubt most people would even be bothered if you built your models in Microsoft Excel.

It is perhaps a subtle distinction, that the complainers haven't made sufficiently clear.

I hope that helps.

Mar 4, 2013 at 8:30 PM | Unregistered CommenterNullius in Verba

Hopefully, scientific issues will not be settled by courts but this case reminds me of a tragic incident in the Netherlands a few years ago. A nurse was accused of killing several patients. A statistician computed the (post hoc) probability that she had done it and she was convicted on the basis of a high number. After a few years in jail her case was re-opened and she was found not guilty. Perhaps this made judges here reluctant to accept statistical information. As a frequentist I consider probabilities of single events nonsense (don't tell that to physicists of quantum mechanics). Therefore, '20 percent chance of rain' should be interpreted as predictions of rain in similar cases prove to be true in 20 percent of the cases. Makes perfect sense, meteorologists can determine that number, but tomorrow it will rain or it will not rain. With that 20 percent I will leave my umbrella at home.

Mar 4, 2013 at 8:33 PM | Unregistered CommenterMindert Eiting

What does a 20% chance of rain mean ?

To ordinary folk, it means don't bother taking your umbrella, unless you are having your hair done at a top salon.

Mar 4, 2013 at 8:36 PM | Unregistered CommenterEternalOptimist

Mar 4, 2013 at 8:02 PM | Richard Betts

So, we use observations and experiments and existing theory (eg: fluid dynamics) to produce models which are then used to try to understand and explain the observed behaviour of the atmosphere and forecast its behaviour in the future.

Richard, does this not also mean that your models can be tested and verified ONLY in the future and not by any other means?

Morley Sutter

Mar 4, 2013 at 8:40 PM | Unregistered CommenterMorley Sutter

Nate Silver is a great champion of Bayes Theorem. The Signal and the Noise is well worth reading for the expositions of Bayes Theorem even if he struggled a bit with the chapter on climate change.

Mar 4, 2013 at 8:46 PM | Unregistered Commenterpotentilla

EternalOptimist at 8:36 pm, with 3 minutes between our comments, did you read my comment before you wrote yours or not?

Mar 4, 2013 at 8:54 PM | Unregistered CommenterMindert Eiting

When judging whether a case for believing that an event was caused in a particular way is stronger than the case for not so believing, the process is not scientific (although it may obviously include evaluation of scientific evidence) and to express the probability of some event having happened in percentage terms is illusory.

So, as the judge said to the jury, muttering irritably under his breath, reasonable doubt... is ... doubt that is reasonable.

Mar 4, 2013 at 9:04 PM | Registered CommenterPharos

"Bayesian statistics, the approach favoured by the IPCC in its assessments of the world's climate..."

Is it the use of Bayesianism that's the problem, or the wrong choice of prior, as you reported last month? http://www.bishop-hill.net/blog/2013/1/25/uniform-priors-and-the-ipcc.html

Mar 4, 2013 at 9:53 PM | Unregistered CommenterDuncan

@ GDIxon 7:48 PM

Thanks for taking the time to call out Theo on his utterly inane comment. I'd considered doing it, but thought better of it, suspecting it would end in an exchange about as rewarding as discussing evolution with a creationist. His silence since may indicate a level of educability I hadn't expected, so perhaps I am too cynical.

Mar 4, 2013 at 9:55 PM | Unregistered Commenterjim west

Duncan

Re-read what I wrote above. I have no problem with the IPCC's use of Bayes.

Mar 4, 2013 at 9:56 PM | Registered CommenterBishop Hill

Bish

off topic but this looks significant if true

http://principia-scientific.org/supportnews/latest-news/98-breaking-nasa-u-turn-admits-global-warming-bias-on-sun-s-key-role.html

Mar 4, 2013 at 10:13 PM | Unregistered CommenterRandy Wildfire

Hi Richard

Useful comments, thank you.

It is interesting, is it not, how relatively late in the mathematical day probability came along? It was, I think, the last major branch invented, in the early 1800s?

If one had to guess which was nailed down first, probability theory or calculus, the latter seems harder and hence I would think most would say it happened later.

Mar 4, 2013 at 10:18 PM | Unregistered CommenterJustice4Rinka

Mar 4, 2013 at 7:48 PM | GDixon

I did not mean to suggest that Bayes' theorem has no applications in environments that are well understood. My topic was inferences about aspects of the world that are not yet well understood and have not yet been described by a set of well confirmed physical hypotheses (aka physical law).

Mar 4, 2013 at 10:24 PM | Unregistered CommenterTheo Goodwin

Mar 4, 2013 at 8:40 PM | Morley Sutter

Richard, does this not also mean that your models can be tested and verified ONLY in the future and not by any other means?

If this was a purely scientific issue dealing with short timescales then yes, that would be the preferred way to do it. However, the issue is complicated by the potential implications of what the models are trying to predict. If we wait for decades and find out that the worst-case scenarios have been realised, it may be too late to do anything. Of course this may not happen, but we can't be certain either way. It makes sense to try to increase confidence in the models in other ways, e.g.: testing against past data (whilst being careful not to simply test against calibration data, as that would of course not prove anything!)

Having said that, the models developed in the late 60s and early 70s, which form the basis of the models we use now, have now had their predictions tested against about 40 years of observations. The models projected warming of global mean temperature, with faster warming over land and in the Arctic, and all these have happened in reality since the 70s. The recent slowdown in warming suggests that the higher end of previous projections may now be less likely, but since the general trend decade by decade has been a warming as predicted, this still indicates that ongoing GHG emissions will continue to cause warming.

Of course, how we respond to this is a different question that involves politics and risk assessment and not just science. We don't know for certain how fast the future warming will be or what its consequences will be. We also know that policies to reduce emissions will also have their own economic consequences. Weighing all these up and deciding how to respond is a political and personal decision.

Mar 4, 2013 at 10:54 PM | Unregistered CommenterRichard Betts

"The IPCC make this same distinction too, in the definitions for the terms "likelihood" and "confidence". "Likelihood" means the probability of an outcome assuming the model is correct. "Confidence" means the probability that the model is correct. So when they say "it is very likely that..." it doesn't actually tell you what the probability is."

Thank you, Nullius in Verba. Your exposition is brilliant.

The great drawback of subjective statistics is that it leaves you arguing about your "models" yet offers no help in choosing between opposing models. Worse yet, whatever results come from subjective statistics, you are limited to your preconceived notions - the models. The world can never surprise a subjective statistician. For those who have considerable experience in the world, that fact alone proves that subjective statistics cannot help us learn about the world. The world is full of shocking surprises and science reveals shocking matters. Subjective statistics can never get us beyond our preconceived notions.

Mar 4, 2013 at 10:56 PM | Unregistered CommenterTheo Goodwin

Morley

If the exact conditions which led to the conditions observed now can be seen to have occurred in the past, then you can check predictions against the past. Otherwise... is this too obvious?

Mar 4, 2013 at 11:11 PM | Unregistered Commenterdiogenes

Nullius

I invite you to read Jim Bouldin's posts on his Ecologically Oriented blog for a set of well-reasoned articles on the fact that mainstream climate science will not accept criticism of the high-profile methods in use.
http://ecologicallyoriented.wordpress.com/2013/03/04/severe-analytical-problems-in-dendroclimatology-part-nine-the-pnas-review/

Mar 4, 2013 at 11:15 PM | Unregistered Commenterdiogenes

NiV

I think some of the complainers are also under the impression that climate by its very nature cannot be predicted, which appears to be the IPCC's position. I don't have the reference but it has been posted here a zillion times.

Mar 4, 2013 at 11:35 PM | Unregistered CommenterRandy Firefly

There is a fundamental problem of interpretation in applying estimated probabilities to one-off events that have already occurred.

Let's say I got wet last Sunday.

Meteorologist 1 says that was because Sunday's conditions gave a 95% chance of rain, and it did rain.

Meteorologist 2, however, says that there was only a 5% chance of rain under Sunday's conditions, and I was just unlucky to face a rare downpour in these circumstances.

Who is right? And what if a man's life or freedom depended solely on your judgement of which one is right?

Mar 4, 2013 at 11:47 PM | Unregistered CommenterCopner

Welcome back Richard, many thanks for your contributions.

If you get a chance, I would appreciate your comments upon the now "perceived" potential 60-year "rate of change" cycle and why, whilst claimed to be an oceanic phenomenon, it appears to be more manifest in land stations and less evident in ocean temps?

regards

gs

Mar 4, 2013 at 11:57 PM | Registered CommenterGreen Sand

Mar 4, 2013 at 10:54 PM
Richard Betts,
Thank you for your forthright responses.
I guess I am not of the same belief as you: the ability of humans to alter temperature is minuscule, the financial and social costs are immense, and the models are not good enough to be predictive of future temperatures (i.e., they have predicted too rapid a rise, failed to predict the present slowing, and predicted an atmospheric hotspot that has not been found). None of this produces confidence in the models, in the prediction of CAGW, or in our ability to prevent a rise in temperature.

Mar 4, 2013 at 11:11 PM
Diogenes:
What is the evidence that CO2 alone drove past temperatures? Are there not observations to suggest that temperature can drive CO2? What "conditions" are you referring to?
Models that are "tuned" to the past are not good predictors of the future.
Morley

Mar 5, 2013 at 12:09 AM | Unregistered CommenterMorley Sutter

Mar 4, 2013 at 4:00 PM | Justice4Rinka
-----------------------

Burn the witch http://youtu.be/zrzMhU_4m-g

Mar 5, 2013 at 3:58 AM | Unregistered CommenterStreetcred
