## Keenan on McKitrick

*At his own website, Doug Keenan has posted a strong critique of Ross McKitrick's recent paper on the duration of the pause. I am reproducing it here.*

McKitrick [2014] performs calculations on series of global surface temperatures, and claims to thereby determine the duration of the current apparent stall in global warming. Herein, the basis for those calculations is considered.

Much of McKitrick [2014] deals with a concept known as a “time series”. A *time series* is any series of measurements taken at regular time intervals. Examples include the following: the maximum temperature in London each day; prices on the New York Stock Exchange at the close of each business day; the total wheat harvest in Canada each year. Another example is the average global temperature each year.

The techniques required to analyze time series are generally different from those required to analyze other types of data. The techniques are usually taught only in specialized statistics courses.
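A minimal sketch illustrates why ordinary methods do not carry over. The simulation below (with made-up AR(1) parameters, purely for illustration) generates a series in which each value carries over most of the previous value: consecutive observations are strongly correlated, which violates the independence assumption behind standard statistical techniques.

```python
import random

random.seed(0)

# Hypothetical example: an AR(1) series in which each value retains
# 80% of the previous value plus fresh noise (parameters made up).
n = 2000
x = [0.0]
for _ in range(n - 1):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

def lag1_autocorr(series):
    """Correlation between consecutive values of the series."""
    m = sum(series) / len(series)
    num = sum((series[i] - m) * (series[i + 1] - m)
              for i in range(len(series) - 1))
    den = sum((v - m) ** 2 for v in series)
    return num / den

# Ordinary methods assume this is near 0; for a time series it rarely is.
print(round(lag1_autocorr(x), 2))
```

For independent data the printed value would hover near zero; here it sits near the 0.8 built into the simulation, which is why time-series methods have to model the dependence explicitly.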

**Assumptions**

The calculations of McKitrick [2014] rely on certain assumptions. In principle, that is fine: some assumptions must always be made, when attempting to analyze data. A vital question is almost always this: *what* assumptions should be relied upon? The question is vital because the conclusions of the data analysis commonly depend upon the assumptions. That is, the conclusions of the analysis can vary greatly, depending upon the assumptions.

The problem with McKitrick [2014] is that it relies on assumptions that are wholly unjustified—and, worse, not even explicitly stated. Hence, I e-mailed McKitrick, saying the following.

The analysis in your paper is based on certain assumptions (as all statistical analyses must be). One problem is that your paper does not attempt to justify its assumptions. Indeed, my suspicion is that the assumptions are unjustifiable. In any case, without some justification for the assumptions, there is no reason to accept your paper's conclusion.

The issue here is not specific to statistics. Rather, it pertains to research generally: in any analysis, whatever assumptions are made need to be justified.

McKitrick replied, claiming that “The only assumption necessary is that the series [of temperatures] is trend stationary”. The term “trend stationary” here is technical, and is discussed further below.

There are two problems with McKitrick's claim. The first problem is that trend stationarity is not the only assumption made by his paper. The second problem is that the assumption of trend stationarity is unjustified and seemingly unjustifiable. The next sections consider those problems in more detail.

**The assumption of linearity**

McKitrick claimed that his paper only made one assumption, about trend stationarity. In fact, the paper also assumes that all the relevant equations (for the noise) should be linear. Hence, I e-mailed McKitrick back, saying the following.

Stationarity is not the only assumption. Your paper also includes some assumptions about linearity … I do not see how [linearity] can be justified….

McKitrick did not respond. Five days later, I sent another e-mail, again raising the problem of assuming linearity. This time McKitrick replied at length. His reply, however, did not mention linearity.

The climate system is nonlinear. This is accepted by virtually everyone who has done research in climatology. For example, the IPCC has previously noted that “we are dealing with a coupled non-linear chaotic system” [AR3, Volume I: §14.2.2.2]. Hence the assumption of linearity is very dubious. There might be occasions where it is suspected that a linear approximation is appropriate, but if so, then some argument for the appropriateness must be given.

**The assumption of trend stationarity**

For technical details of what it means for a time series to be trend stationary, see the Wikipedia article. This section considers issues that do not require those details.

McKitrick's first e-mail to me acknowledged that trend stationarity “makes an enormous difference for defining and interpreting trend terms”. Simply put, if the trend in global temperatures is not assumed to be trend stationary, then the calculations of McKitrick [2014] are not valid.
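For readers who want a concrete picture of the concept, here is a small simulation (with hypothetical parameters, purely for illustration) of a trend-stationary series: a deterministic straight-line trend plus stationary noise. Removing the fitted line leaves residuals whose behaviour is the same early and late in the record.

```python
import random

random.seed(1)

# Hypothetical trend-stationary series: a linear trend plus AR(1) noise
# (slope and noise parameters made up for illustration).
n = 1000
noise = [0.0]
for _ in range(n - 1):
    noise.append(0.5 * noise[-1] + random.gauss(0, 1))
y = [0.01 * t + noise[t] for t in range(n)]

# Ordinary least squares fit of a straight line y ~ a + b*t (closed form).
t_mean = (n - 1) / 2
y_mean = sum(y) / n
b = (sum((t - t_mean) * (y[t] - y_mean) for t in range(n))
     / sum((t - t_mean) ** 2 for t in range(n)))
a = y_mean - b * t_mean
resid = [y[t] - (a + b * t) for t in range(n)]

def variance(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

# Trend-stationary: after de-trending, the noise looks the same in both halves.
print(round(variance(resid[:n // 2]), 2), round(variance(resid[n // 2:]), 2))
```

The two printed variances are close to each other, which is the defining feature: once the deterministic trend is subtracted, what remains is stationary.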

The abstract of McKitrick [2014] states that the calculations used in the paper are “valid as long as the underlying series is trend stationary, *which is the case for the data used herein*” (emphasis added). The emphasized claim seems to imply that trend stationarity of the temperature data is an established fact.

The body of the paper says that the temperature data is “assumed to be trend-stationary”. The paper makes no attempt to justify the assumption. At least, though, the body of the paper acknowledges that trend stationarity is an assumption, rather than a fact.

McKitrick's first e-mail to me said that “decisive tests [for trend stationarity] are difficult to construct”. Thus, McKitrick seems to be acknowledging that he has no decisive statistical tests to justify the assumption of trend stationarity.

McKitrick's first e-mail also referred to a workshop, held in 2013, at which “there were extended discussions on whether global temperature series are stationary or not”. Thus, McKitrick effectively acknowledged that trend stationarity is nowhere near being an established fact.

McKitrick's second e-mail attempted some justification for assuming trend stationarity. It said this: “The reason I do not accept the nonstationarity model for temperature is that it implies an infinitely large variance, which is physically impossible, and also that the climate mean state can wander arbitrarily far in any direction, which does not accord with life on Earth”. The first claim, about “an infinitely large variance”, is false, and so it will not be discussed further here. The second claim, about how “the climate mean state can wander arbitrarily far in any direction”, is true in principle.

To understand McKitrick's second claim, first note that for “climate mean state” it is enough to consider simply “global temperature”. If the global temperature were truly non-stationary, then it could indeed wander arbitrarily far, up and down; i.e. it could become arbitrarily hot and arbitrarily cold. We know that global temperatures do not vary *that* much. Hence, global temperatures cannot be non-stationary—this is McKitrick's argument.

McKitrick's argument is easily seen to be invalid. Consider a straight line (that is not perfectly horizontal). The straight line goes arbitrarily far up and arbitrarily far down—i.e. arbitrarily far in both directions. A straight line, though, is the basis for the calculations of McKitrick [2014]. Thus, if McKitrick's argument were correct, it would invalidate the basis for McKitrick's own paper.
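The point can also be made numerically. The sketch below simulates a pure random walk, the canonical non-stationary process (the parameters are arbitrary, for illustration only): its excursions grow with the length of the record, so over a short window it stays within modest bounds, yet over a long enough horizon it wanders arbitrarily far, exactly as a straight line does.

```python
import random

random.seed(2)

# A pure random walk: the canonical non-stationary ("unit root") process.
def walk(n):
    x = [0.0]
    for _ in range(n - 1):
        x.append(x[-1] + random.gauss(0, 1))
    return x

# Typical excursions grow like sqrt(n): modest over a short record, but
# unboundedly large as the record lengthens -- just as a straight line's do.
for n in (100, 10_000, 1_000_000):
    w = walk(n)
    print(n, round(max(abs(v) for v in w), 1))
```

Neither model is physically possible at infinite horizons; both can be perfectly serviceable approximations over the finite interval actually analyzed.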

McKitrick's argument against non-stationarity was raised earlier, by someone else, on the Bishop Hill blog. In response, an anonymous commenter (Nullius in Verba) left a perspicacious comment. The comment is excerpted below.

… everyone agrees that a non-stationary … process is not physically possible for temperature, in exactly the same way as they agree that a non-zero linear trend isn't physically possible. If you extend a non-zero trend forwards or backwards in time far enough, you'll eventually wind up with temperatures below absolute zero in one direction, and temperatures hotter than the sun's core in the other. For the *actual* underlying process to be a linear trend is physically and logically impossible.

However, nobody objects on this basis because everybody knows it is only being used as an approximation that is only considered valid over a short time interval. ….

In exactly the same way, a non-stationary … process is being used as an approximation to a stationary one, and is only considered valid over a short time interval. It arises for exactly the same reason….

Statisticians use non-stationary [models] routinely for variables that are known to be bounded, for very good reason. They're not stupid.

Additionally, McKitrick's argument is an appeal to physics. Yet using physics to exclude a statistical assumption is inherently very dubious. For some elaboration on this, see the Excursus below.

As noted above, several researchers have contended that non-trend-stationarity might be an appropriate assumption for global temperatures. An early paper making that contention is by Woodward & Gray [1995]. That paper currently has 68 citations on Google Scholar, including several since 2013. (One of the latter even presents a physics-based rationale *for* non-trend-stationarity: Kaufmann et al. [2013].)

There are other papers that do not cite Woodward & Gray, but which also contend for considering non-trend-stationarity; e.g. the paper of Breusch & Vahid [2011]—which is part of the Australian Garnaut Review. The same contention has even appeared in an introductory textbook on time series: *Time Series Analysis and Its Applications* [Shumway & Stoffer, 2011: Example 2.5; see too set problems 3.33 and 5.3]. Contentions for non-trend-stationarity would not appear in so many respected sources, over so many years, if McKitrick's appeal to a simple physical argument had merit.

It is worth reviewing how McKitrick's story on trend stationarity of the global temperature series changed. First, the abstract of the paper claimed that the temperatures are trend stationary—seemingly an established fact. Second, the body of the paper mentions, in one sentence, that trend stationarity is actually an assumption, rather than a fact—but it gives no justification for the assumption. Third, McKitrick's first e-mail acknowledged that there have been no tests to justify the assumption and also that the validity of the assumption is debated. Fourth, McKitrick's second e-mail, in response to my criticisms of the foregoing, attempted some justification of the assumption—but a justification that is easily seen to be invalid, as well as not supported by many other researchers who have studied the issue.

**Statistical models**

Whenever data is analyzed, we must make some assumptions. In statistics, the assumptions, collectively, are called a “statistical model”. There has been much written about how to select a statistical model—i.e. about how to choose the assumptions.

This issue is noted by the book *Principles of Statistical Inference* (2006). The book's author is one of the most esteemed statisticians in the U.K., Sir David Cox. The book states this: “How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis”. In other words, choosing the assumptions is often the *difficult* part of a statistical analysis.

Another book that is relevant is *Model Selection* [Burnham & Anderson, 2002]. This book currently has about 25 000 citations on Google Scholar—which seems to make it the most-cited statistical research work published during the past quarter century. The book states the following (§8.3).

Statistical inference from a data set, given a model, is well advanced and supported by a very large amount of theory. Theorists and practitioners are routinely employing this theory … in the solution of problems in the applied sciences. The most compelling question is, “what model to use?” Valid inference must usually be based on a good approximating model, but which one?

The book also refers to the question “What is the best model to use?” as *the critical issue* (§1.2.3).

The selection of a statistical model tends to be especially difficult for time series. Indeed, one of the world's leading specialists in time series, Howell Tong, stated the following, in his book *Non-linear Time Series* (§5.4).

A fundamental difficulty in statistical analysis is the choice of an appropriate model. This is particularly pronounced in time series analysis.

Note that, in making the statement, Tong does not assume that time series are linear—as the title of his book makes clear.

**Concluding remarks**

What McKitrick [2014] has done is skip the difficult part of statistical analysis. That is, McKitrick does not genuinely consider the choice of statistical assumptions. Instead, he just picks some assumptions, with negligible justification, and then does calculations.

Realistically, then, McKitrick [2014] does not present a statistical analysis—because the paper is missing a required part. If McKitrick had been forthcoming about this, that would have been fine. For example, suppose McKitrick had included a disclaimer like the following.

The calculations in this work rely on assumptions: about linearity and trend stationarity (and normality). Those assumptions are unjustified and might well be unjustifiable. Relying on different assumptions might well lead to conclusions that are very different from the conclusions of this work. Hence, the conclusions of this work should be regarded as highly tentative.

Such a disclaimer would have been fair and honest. Instead, the paper, especially the abstract, greatly misleads: and McKitrick must have known that it does so.

Finally, methods to detect trends in global temperatures have been studied by the Met Office. A consequence of the study is that “the Met Office does not use a linear trend model to detect changes in global mean temperature” [HL969, *Hansard* U.K., 2013–2014].

**Excursus: Realistic models?**

A statistical model does not need to be physically realistic. An example will illustrate this. Suppose that we have a coin. We toss the coin a few times, with the outcome of each toss being either Heads or Tails. We might then make two assumptions. First, the probability of the coin coming up Heads is ½. Second, the result of one toss is unaffected by the other tosses.

The two assumptions comprise our statistical model. The assumptions obviously elide many physical details: they do not tell us what type of coin was used, how long each toss took, the path of the coin through the air, etc. The assumptions, though, should be enough to allow us to analyze the data statistically.

The set of assumptions—i.e. the model—also differs from reality. For instance, our assumption that a coin comes up Heads with probability ½ is only an approximation. In reality, the two sides of a coin are not exactly the same, and so the chances that they come up will not be the same. It might really be, for instance, that the probability that a coin comes up Heads is 0.500001 and the probability that it comes up Tails is 0.499999. Of course, in almost all practical applications, this difference will not matter, and our assumption of a probability of ½ will be fine.
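A back-of-the-envelope calculation (using the standard error of a proportion) shows why the difference cannot matter in practice: detecting a bias as tiny as the hypothetical 0.000001 above would require an astronomical number of tosses.

```python
# Standard error of an estimated probability from n tosses: sqrt(p*(1-p)/n),
# which is about 0.5/sqrt(n) for a near-fair coin. To resolve a difference d,
# the standard error must shrink to about d, i.e. n ~ (0.5/d)^2.
d = 0.000001                # the hypothetical bias: 0.500001 versus 0.5
n_needed = (0.5 / d) ** 2   # about 2.5e11 tosses
print(f"{n_needed:.1e}")
```

Even at one toss per second, that is on the order of eight thousand years of tossing, so for any practical purpose the assumption of probability ½ is safe.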

There is also a second way in which our model of a coin toss differs from reality. We could predetermine the outcome of a toss by measuring the position of the coin prior to the toss, measuring the forces exerted on the coin at the start of the toss, and determining the air resistance as the coin moves through the air (all this is in principle; in practice, it might not be feasible [Strzalko et al., 2010]). Thus, a real toss is deterministic: it is not random at all. Yet we modelled the outcome of the toss as being random.

This second way in which our model differs from reality—incorporating randomness where the actual process is deterministic—is fundamental. Yet, by modelling the outcome of a coin toss as random, our model is vastly more useful than it would be if we modelled the toss with realistic determinism (i.e. with all the physical forces, etc., that control the outcome of the toss). Indeed, statistics textbooks commonly model a coin toss as being random. Moreover, people have probably been treating a coin toss as random for as long as there have been coins.

To summarize, we model a coin toss as a random event with probability ½, even though we know that the model is physically unrealistic. This exemplifies a maxim of statistics: “all models are wrong, but some are useful”. The maxim seems to be accepted by all statisticians (as well as being intuitively clear). McKitrick, by appealing to a supposed lack of physical realism of non-stationary models, ignores that.

❧ *A draft of this Comment was sent to Ross McKitrick; McKitrick acknowledged receipt, but had nothing to say on the technical issues.*

**See also**

• Is a line trending upward?

Breusch T., Vahid F. (2011), “Global temperature trends”, *Econometrics and Business Statistics Working Papers* (Monash University), 4/11.

Burnham K.P., Anderson D.R. (2002), *Model Selection and Multimodel Inference* (Springer).

Cox D.R. (2006), *Principles of Statistical Inference* (Cambridge University Press).

Kaufmann R.K., Kauppi H., Mann M.L., Stock J.H. (2013), “Does temperature contain a stochastic trend: linking statistical results to physical mechanisms”, *Climatic Change*, 118: 729–743. doi: 10.1007/s10584-012-0683-2.

McKitrick R.R. (2014), “HAC-robust measurement of the duration of a trendless subsample in a global climate time series”, *Open Journal of Statistics*, 4: 527–535. doi: 10.4236/ojs.2014.47050.

Shumway R.H., Stoffer D.S. (2011), *Time Series Analysis and Its Applications* (Springer).

Strzalko J., Grabski J., Stefanski A., Perlikowski P., Kapitaniak T. (2010), “Understanding coin-tossing”, *Mathematical Intelligencer*, 32: 54–58. doi: 10.1007/s00283-010-9143-x.

Tong H. (1995), *Non-linear Time Series* (Oxford University Press).

Woodward W.A., Gray H.L. (1995), “Selecting a model for detecting the presence of a trend”, *Journal of Climate*, 8: 1929–1937. doi: 10.1175/1520-0442(1995)008<1929:SAMFDT>2.0.CO;2.

## Reader Comments (150)

Your example of the coin is not really correct - the forces exerted each time are random and thus the outcome of the toss is still random, unless we are talking about the outcome after the toss has been made.

As for a line stretching past absolute zero, does that matter? If the calculations work to a pretty good approximation in the part of the line we are using, then it does not.

Sheesh, wake me up when someone manages to "wiggle" the atmospheric CO2 up and down several times and finds (or not) a correlated wiggle in temperatures.

At the moment I'm having a nightmare that the linearly rising sea level is causing my linear increase in weight, because these two time series are correlated.

I think this is a bit of an over-reaction.

In any mathematical model, you have to make assumptions. Ideally, all the assumptions made will be clearly stated in the paper, and their validity will be discussed. It is not necessary to demonstrate convincingly that all the assumptions made in the paper are valid. These ideals are rarely met in reality - for example, is it the case that every paper that talks about a trend of x degrees per decade contains a disclaimer that there is an assumption that the time-series can be represented as a linear model, and an acknowledgement that this model might not be accurate? Of course not! Does every paper on the results of climate models list all the assumptions made in the model and discuss their validity?

When choosing between different models, there's a lot to be said for picking the simplest one to start with, i.e. making the simplest possible assumptions, such as linearity and stationarity in the case of a statistical model.

OK, McKitrick's paper makes some assumptions, the validity of those assumptions can be debated and questioned, and the results will depend on these assumptions to some extent.

"The climate system is nonlinear." Anybody who doubted that should reflect that it is often stated that "radiative forcing" is a logarithmic (ie nonlinear) function of CO₂ concentration.

And obviously, the physical behaviour of water changes greatly as the local temperature changes from < 0°C to > 0°C, so a very key component of the climate system is highly nonlinear.

However, as someone once said, all physical systems are nonlinear but often we only understand them by treating them as if they were linear. We don't invariably spell out that the assumption of linearity is just an assumption.

I'll look forward to reading McKitrick's paper and trying to form my own opinion as to whether assuming linearity is reasonable.

"To summarize, we model a coin toss as a random event with probability ½, even though we know that the model is physically unrealistic. This exemplifies a maxim of statistics: 'all models are wrong, but some are useful'. The maxim seems to be accepted by all statisticians (as well as being intuitively clear). McKitrick, by appealing to a supposed lack of physical realism of non-stationary models, ignores that."

What in blue blazes is he trying to say? That a coin toss isn't really random, so the statistical model for it is unrealistic but still useful? Therefore, by analogy, the statistical models for the climate, even though they have been proven to be wrong, are still useful? In fact, all statistical models are wrong but SOME are still useful. Ahh, but which of the wrong statistical models are useful? Ones that show cooling clearly are not useful, but models that show warming are useful. Yes, I get it: the warming models are very useful for generating grants and furthering the "cause", therefore they must be the scientifically correct models. Cooling models are not useful, so therefore they must be scientifically incorrect. Prediction of physical measurements is irrelevant because all models are inherently wrong anyway.

Also strong criticisms of McKitrick's paper on Richard Telford's site.

Has anyone noticed this extraordinary article in the Wall St Journal from last week from a believer:

http://online.wsj.com/articles/climate-science-is-not-settled-1411143565

"A time series is any series of measurements taken at regular time intervals. Examples include the following: the maximum temperature in London each day ..."

The maximum temperature does not occur at the same time each day. While the measurement may be taken at the same time each day (when the person leaves their shed with a pencil and notebook and looks at the min-max thermometer), the readings each day, for min and max temperatures, do not refer to the same time of day.

It is what is done, and it may be an example of a time series, but the description and example in the quote are at odds with each other!

"The techniques required to analyze time series are generally different from those required to analyze other types of data. The techniques are usually taught only in specialized statistics courses."

Time series analysis techniques are also taught outside statistics courses. They include Fourier Analysis and are used to improve signal to noise ratios in exploration seismics, medical ultrasound signals, the radio component of mobile phone communications, in fact, in digital data communications in general.

What makes temperature data different is that there are often missing or incorrect values in the series, often single, isolated values, as that is the nature of things, and that creates a lot of very high frequency noise. While one value may be completely wrong, those either side are probably of very good quality. Telecommunication time series suffer this less, especially after much of it has been corrected using error-correcting techniques, such as parity checking, that temperature data lack!

Keenan protests too much.

The fact that climate is the product of a nonlinear dynamic system does not mean that noise (random measurement error) in a climate measurement (here global estimated surface temperature anomaly) is also nonlinear, especially over a small slice of time like three decades.

Simpler OLS methods give similar results.

So does the Mark 1 analog eyeball.

"A difference that makes no difference is no difference at all." William James.

Without specific examples I can't be sure I understand Keenan's point. Here's what I take away from it:

McKitrick just ASSUMED the coin was honest (metaphorically.)

Yes, Keenan's right that McKitrick hasn't proved it, and that if it isn't true it makes a BIG difference to the outcome of the figures.

Yet- it's hardly a damning rebuttal. The honesty of the coin is a plausible default condition. It's a rebuttable presumption, but Keenan hasn't actually rebutted that presumption. He's just saying 'You're making that assumption, prove it!'

If I'm totally missing the point I apologise- but if so, could someone explain just what Keenan's point IS?

Don't lose sight of the bottom line, i.e. does this provide any properly validated evidence that we are likely to be on a path to climate catastrophe on a timescale we can't cope with? It doesn't seem so.

Yes, it could be yes or it could be no, but the odds seem heavily towards no, based on past climate patterns.

Much thanks to His Eminence for posting this!

@ Tim, 2:34 PM

Regarding your first point—no: see the cited reference [Strzalko et al.]. Regarding your second point—yes: that is what Nullius in Verba was saying.

@ Paul Matthews, 3:04 PM

You say “In any mathematical model, you have to make assumptions.” My post says this: “The calculations of McKitrick [2014] rely on certain assumptions. In principle, that is fine: some assumptions must always be made, when attempting to analyze data.” Hence, you and I seem to be agreeing here.

You say “McKitrick's paper makes some assumptions, the validity of those assumptions can be debated and questioned, and the results will depend on these assumptions to some extent”. Agreed, and if McKitrick [2014] had been forthcoming about that, that would have been fine. My post says “The problem with McKitrick [2014] is that it relies on assumptions that are wholly unjustified—and, worse, not even explicitly stated”. In contrast, McKitrick claims that his results are “valid as long as the underlying series is trend stationary, which is the case for the data used herein”. The claim is false and seriously misleading. That is one of the main points in the post.

@ Robert Christopher, 3:44 PM

Regarding “maximum temperature does not occur at the same time each day”—good point! It would have been better for my post to say “the temperature at noon in London (Heathrow) each day”, or similar. I will amend the version on my web site.

Regarding signal processing techniques for time series, those are indeed taught outside of specialized statistics courses. Applying them is sometimes difficult, as you indicate. Signal-processing techniques tend to be less relevant, and greatly less used, for the sorts of questions that conventional time-series techniques are brought to bear on.

I can still remember advice, to the grad student class, from an excellent faculty member of my grad-studies committee:

And then, on a whim, he gave us a 'term-paper' based on the error made in a published paper we were discussing.

Inadvertently, I later found another use for assumptions: always give a reviewer/referee at least one minor fault they can find in your submission. An assumption that is provable with a trivial amount of extra work is enough, and they can feel they have done their job adequately. [Actually I am not, and was not, that cynical. But that's what I learned about the process.]

This post is difficult to follow, but two points might help.

Trend stationary is way more general than stationary. The deterministic trend term can be any function of time - nonlinear, chaotic or whatever. It's only the random noise term that is constrained to be stationary.

Prof McKitrick is unlikely to think temperature is a stationary time series. Not least, because the tests in his paper may be read as rejecting stationarity after 16-26 years of temperatures run backwards (at 5% significance).

At first sight, McKitrick's paper seemed like dressing up the visually self-evident in rather fancy statistical clothing. However, the distress it has caused some believers (elsewhere) suggests it may have been worthwhile after all!

This must be an example of the 97% consensus of which the great John Cook speaks.

I understand the concept of a time series being "a sequence of data points, measured typically at successive points in time spaced at uniform time intervals" (Wikipedia), but is a daily series of maximum temperatures not also a "time series"? The first example quoted by Wikipedia is the daily closing figure for the Dow Jones, but is the daily peak figure not equally a time series? Once in each 24-hour period is also a "uniform time interval", no?

This critique is much like all those 'medical' or environmental scares that tell us compound "X" has been proven to double the potential for cancer, heart disease or whatever, declaring that the results are 'statistically significant'. In almost every case it turns out that the doubling, while truly a doubling and also 'statistically significant', is in the order of 1 in 10,000 being shoved into 2 in 10,000. In other words, while it may be 'statistically significant', it is basically irrelevant if not in reality meaningless.

The critique also has a nasty, underlying meanness to it that I personally find most offensive. I do not know what Mr Keenan is attempting here, whether it is to say that the observed pause is not real because the analysis is flawed, that the pause cannot be quantified statistically and thus could simply be an artifact that has no meaning and is thus not relevant to increasing temperatures, that McKitrick is being deliberately deceptive (as the tone of the critique implies), that McKitrick is incompetent and nothing he says should be listened to, or what. Mr Keenan sneers at what he sees as a lack of fairness and honesty on the part of McKitrick; one could ask the same of him.

Surely this isn't correct for a max/min thermometer

It is true that the thermometer is read and reset at the same time each day, but theoretically the max (or min for that matter) could have occurred either 23:59:59 or 00:00:01, to the nearest second, before the reading. It is more accurate to say that the value recorded refers to a different time each day.

I have no idea what this means, which is a problem I found trying to answer Doug's questions in general. What are the "relevant equations" for the noise? The HAC variance matrix is a nonparametric estimator valid for any autocorrelated process as long as it doesn't contain a unit root. You don't write out the equations for the noise process, only for the estimator, and I gave those in the paper. I haven't seen any discussion of what a "nonlinear" autocorrelated process would look like or how well the HAC estimator does in those cases, but I know of no reason to assume that it becomes invalid, or for that matter why we would expect that an autocorrelated process couldn't be well represented by a combination of linear terms.

Doug seems to think I assume linearity of the climate system. Well, no. What my paper does is ask whether a linear trend from year A to year B in a particular time series has a confidence interval that includes zero. I don't assume the climate system is linear, in all respects, for all time, infinitely in every direction. In fact, by talking about the emergence of a trendless portion of a previously upward-sloping series I pretty much assume nonlinearity. A reader may be uninterested in the narrow question I pose, but should not assume that I am making a more general claim than the one I actually investigate.

The long run variance of an I(1) process is an expression involving the term 1/(1-rho^2) where rho goes to 1. Hence the expression goes to infinity. Look it up in any econometrics textbook.

Trend stationarity means stationary after de-trending. De-trending (do I need to explain this?) means making the trend perfectly horizontal.

But now let's cut to the chase and assume that Doug is right and global temperatures are nonstationary, in particular, they contain a unit root and are integrated of order 1, or I(1). That means there is no meaning to the term "trend", as the long run variance of any trend term is infinity. Then there has never been a trend and the "pause" is as old as the Earth, or alternatively, you would take it to mean that there is no reason to read a paper talking about measuring the duration of a trendless sub-interval of a temperature time series. I guess therefore that there is another assumption inherent in my paper, namely that the reader has chosen to read it.

@ Mike Jackson, SandyS

Yes, the post is technically correct; i.e. the series of maximum temperatures is indeed a time series, for the reason that you give. The issue that Robert Christopher raised is not about technical validity, but rather about didactic clarity: the example could have had a potential point of confusion removed, via just a small change.

@ Peter C

I hope that you will fairly weigh the evidence and logic that is presented in the post.

About a thousand years ago I studied maths at Oxford. Yet, I'm left feeling like I'm watching an argument between Lewis Carroll and Edward Lear about whether it is the wibble that is wobbling or the wobble that is wibbling. I could say much the same for almost every post on Climate Audit.

Let me know when the wibble wobblieness has been resolved.

James Evans

LOL !!

I studied maths in a hovel known as Glasgow University, graduating toward the middle or end of the Maunder Minimum, depending on which proxies you prioritise. I became a confirmed sceptic when Roger Pielke Snr replied to Gavin Schmidt, "By God, sir, it's a camel!" Can't remember the question now.

Richie Rich: read the response thread at Telford's and take note of the comment I just posted. Telford has admitted that he generated a distribution using a looser definition of hiatus than the one I used, which exaggerates the size of the lower tail. But he hasn't posted the correct distribution, nor has he changed anything in his original, inaccurate post.

My posted comment reads:

Richard – OK so you have admitted that you generated a distribution based on a careless misreading of my hiatus definition, one that not only fails to correspond to the definition I applied, but is inaccurate in a way that exaggerates the evidence for your conclusion. Because you permit interrupted hiatuses to count, your distribution will be too wide, and the lower tail too large. Faced with this realization you should have immediately re-done your analysis and posted a correction. For all we know the observed hiatus may be in the bottom 1% tail of an accurately generated distribution, and since you haven't corrected your inaccurate post we have no way of knowing.

But instead you leave your inaccurate post unchanged and resort to a ridiculous smear

In other words, you are saying that if the results had looked different than they did I would have cheated by weakening the definition. Well sir, you are the one who cheated by changing the definition without telling your readers. Real classy joint you run here.

@ “Another example is the average global temperature each year.”

An annual approach may be stupid in regions with big differences in sunshine between the seasons. Somewhere I read an analysis by Drummond, A. J., 1943, "Cold winters at Kew Observatory, 1783-1942", QJoRMetSoc, p. 17f, saying: "The present century has been marked by such a widespread tendency towards mild winters that the 'old-fashioned winters', of which one had heard so much, seemed to have gone forever. The sudden arrival at the end of 1939 of what was to be the beginning of a series of cold winters was therefore all the more surprising. Never since the winters of 1878/79, 1879/80 and 1880/81 have there been three in succession so severe as those of 1939/40, 1940/41 and 1941/42."

Keenan is trying to muddy the waters with waffle.

He doesn't have a point to make. You can assume the trend will be up, down or flat, but only one of those three outcomes can happen, so Keenan is effectively waffling.

I don't buy Doug Keenan's clear-cut division between things that are random and things that are deterministic.

I can't find the reference but I remember reading Feynman's argument that, even if the random effects of quantum mechanics were not there to make precise measurement of physical quantities impossible, we could still not calculate the future state of things, no matter how precise our measurements. Essentially because of chaotic effects kicking in.

For me 'random' simply means 'unpredictable'. The impossibility of measuring with complete precision the position and velocity of every part of the tossed coin, the precise details of the air turbulence along its path, and all the other unmeasurable details involved in its flight; the difficulty of simulating its interactions with a viscous, compressible fluid; not to forget its landing on a surface with its own physical characteristics: all of this means that the coin toss is modelled as realistically as a random process as it is as a deterministic one. Inherently, neither representation can capture the physical reality in every detail, but to claim one as correct and the other as wrong is... wrong.

Like another commenter, I find the hectoring tone of Doug Keenan's posting unpleasant.

"That a coin toss isn't really random":

Of course it isn't random. For practical reasons we merely act as if it is. There's an enormous gap between the non-random forces to be considered in determining the trajectory of a coin and the truly random events which occur at a sub-microscopic level.

Quantum physics came after Laplace, so he was wrong about the 'tiniest atom'. The rest of his comment below certainly applies to a tossed coin:

"We may regard the present state of the universe (and our coin) as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."

Pierre Simon Laplace, A Philosophical Essay on Probabilities

I think Keenan's scrutiny is awesome. If only 10% of it were to be applied in climate science, the field would do a 180 in a matter of weeks.

Laplace: "... for such an intellect nothing would be uncertain ..." except if we really do have free will. Not quite off topic - it's those darned assumptions again.

Two fleas arguing about who owns the dog.

"But now let's cut to the chase and assume that Doug is right and global temperatures are nonstationary, in particular, they contain a unit root and are integrated of order 1, or I(1). That means there is no meaning to the term "trend", as the long run variance of any trend term is infinity."

There could still be a meaning to the term "trend". If you consider the distribution of X(t + dt) - X(t) for constant dt, you can get a finite mean and variance (although in general you don't have to). If the mean is non-zero and increases proportionately with dt, then mean/dt is a constant and can be considered the 'trend'. While a pure I(1) process has increments with mean zero, an I(1)+trend process has a well-defined non-zero trend. (Which is not to say it's easy to measure!)
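The point about differences is easy to demonstrate with a toy numpy sketch (my own example, with an assumed drift of 0.02 per step, not any actual temperature data):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random walk with drift: X_t = X_{t-1} + mu + e_t, i.e. an I(1)
# process plus a deterministic drift of mu per step.
mu, sigma, T = 0.02, 1.0, 5000
x = np.cumsum(mu + sigma * rng.standard_normal(T))

# The increments X(t+1) - X(t) are i.i.d. with finite mean mu and
# variance sigma^2, so their sample mean recovers the 'trend' per step...
d = np.diff(x)
mu_hat = d.mean()

# ...but noisily: the standard error is sigma / sqrt(T-1), which here
# is comparable to mu itself -- "not easy to measure", as noted above.
se = sigma / np.sqrt(len(d))
print(mu_hat, se)
```

With these numbers the standard error is about 0.014 against a true drift of 0.02, which is exactly why a drift buried in I(1) noise is hard to pin down.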

I haven't looked at the details of McKitrick's paper closely enough to comment on the main issue. I would assume that the trend-stationarity assumption applies to the behaviour of the infinite time series from which we have taken a finite sample. It's commonly the case that a short enough segment from a trend-stationary series looks and acts as if it were non-stationary, which is what we're really talking about here. The question, therefore, is how the test carries over to such short segments taken from a trend-stationary series: segments too short to exclude unit roots from the confidence intervals. I would think that if they behaved any differently from a non-stationary series, that would potentially provide a more powerful test of non-stationarity than I think is possible, so I suspect Doug has a point.

Although I'd also say it was a minor one, and easily fixed. The thing to do would be to perform a relevant test for stationarity on the series, the trick being to show that passing the test implies the assumptions actually required for McKitrick's test to give valid results. As for the 'hectoring tone', we routinely use a far worse tone criticising the opposition, and expect them to take it. Maybe I'd not have put it quite that way, but there's nothing wrong with us being able to criticise one another, and to take criticism that perhaps isn't expressed quite the way it was meant. It's more important for us to get the right answer, and I'd like to think we're all thick-skinned enough to ignore any issues with the 'tone' in which arguments are put along the way to achieving that.
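For what a 'relevant test for stationarity' might look like, here is a hand-rolled Dickey-Fuller-style check in numpy. This is a sketch of the idea only, on simulated series; a real analysis would use a proper augmented test with lag selection from an established library:

```python
import numpy as np

rng = np.random.default_rng(2)

def df_tstat(x):
    """t-statistic on gamma in the Dickey-Fuller regression
    diff(x)_t = c + gamma * x_{t-1} + e_t.
    A strongly negative statistic (below roughly -2.86, the usual
    5% critical value with a constant) is evidence against a unit root."""
    dx = np.diff(x)
    X = np.column_stack([np.ones(len(dx)), x[:-1]])
    beta, _, _, _ = np.linalg.lstsq(X, dx, rcond=None)
    resid = dx - X @ beta
    s2 = resid @ resid / (len(dx) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

T = 2000
e = rng.standard_normal(T)
walk = np.cumsum(e)                # unit root: I(1), non-stationary
ar = np.empty(T)
ar[0] = 0.0
for t in range(1, T):
    ar[t] = 0.5 * ar[t - 1] + e[t] # stationary AR(1)

# The stationary series produces a far more negative statistic.
print(df_tstat(walk), df_tstat(ar))
```

On a series this long the two cases separate cleanly; Stephen's point is that on a short segment they need not, which is where the difficulty lies.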

It's getting the science right that matters to us, isn't it?

I think Ross's reply pretty much shows Keenan making a heroic defense of the consensus by attempting to confuse the issues.

The junk heap of failed AGW apologia grows unabated.

Perhaps the goal is to get the pile so high that people will forget why the junk pile exists?

@ Ross McKitrick, 6:49 PM

Taking your points in turn....

1. If you “have no idea” what a nonlinear time series is, then perhaps you should consult books on the subject. Textbooks include *Non-linear Time Series* by Tong [1995], *Nonlinear Time Series* by Fan & Yao [2003], and *Nonlinear Time Series* by Gao [2007]. (I pick those because I have copies of them.)

2. I did not claim that you assumed linearity of the climate system. Rather, I stated that your paper assumed that the global-temperature series is linear, and I pointed out that your paper gave no justification for that assumption. Since the climate system is nonlinear, some justification for your paper’s assumption of linearity obviously needs to be given. You seem to be engaging in rhetoric here.

3. An I(1) process has a variance that reaches infinity only in infinite time. You were giving an objection based on “physical” considerations. We do not have infinite time in the physical world.

4. Your paper fits a straight line to the temperature series (and assumes that the residuals from the fit are from a linear Gaussian time series). Your paper does not assume that the straight line is horizontal, obviously. You seem to be confused here.

5. Your claim is invalid: a non-stationary series can have a trend—a stochastic trend. Such trends are discussed even in introductory textbooks, e.g. Shumway & Stoffer (cited in my post). For a related online note about this issue, see

http://robjhyndman.com/hyndsight/arima-trends/

This is far outside my territory, but I can at least read the words.

Obviously, McKitrick does not believe the system to be linear and I don't think Keenan or a reasonable reader believes that he does so.

It seems to me that Keenan's criticism is that McKitrick should have more clearly stated that his assumption of linearity in reality is an approximation considered by McKitrick to be good enough to reach an approximately correct answer.

Maybe McKitrick's assumption should have been tested in order to demonstrate the validity of the assumption and thus bolster the weight of the conclusion? Correct?

Or maybe it is self evident that given the short period in question the assumption or approximation will not in any significant way invalidate the conclusion?

One thing is certain, we do not have the means to reach the objective true conclusion, but among the different approaches to choose from, some may be more precise than others.

Steveta-uk nailed it.

"A difference that makes no difference is no difference at all."

Simplifying assumptions are permitted when the message of the derived observation would not change under more rigour or formalism.

It has always been thus, in order to make progress.

......

That said, lack of formalism is causing some differences that matter in climate research.

One example is the home-made PCA variation behind the hockey stick.

More generally, fewer modern authors abide by the old maxim in stats, "First understand your distribution."

Outside stats, in ocean pH, many authors seem not to know that pH is NOT the negative log of the hydrogen ion concentration. (Activity, not concentration, is the formally correct quantity. The difference does matter when the ionic strength is high.)

One byproduct of age and experience is worry about "Near enough is good enough".

However, criticism of McKitrick is unwarranted, because he is correct for the simple purpose at hand and nobody has demonstrated the contrary. It is somewhat like arguing over whether a normal or a Poisson distribution applies to some noisy data, when the data are so noisy that either assumption suffices to show an approximate, practically useful outcome.

The assumptions McKitrick makes are those common to climate science. It is entirely usual to accept most of the assumptions of the theory you are seeking to refute. It is ridiculous to criticise him for doing this and not proving those assumptions.

Linearity is a ubiquitous assumption in climate science. You can't have sensitivity without it. And trend stationary is also a common thing to assume. In fact McKitrick is being generous here as many assert that warming is accelerating which would be in even greater conflict with observation.

In short, this is a ridiculous beat-up, attempting to put McKitrick's paper through an obstacle course which no other paper in climate science has had to face. The objections it raises are specious. Keenan demands that McKitrick justify the common assumptions of the entire field of climate science in a manner in which no-one else in that field has ever been asked to do. These are not McKitrick's assumptions; they are those of the field, and they underpin the models he is seeking to test. I find the criticism to be completely over the top and ridiculous.

If Keenan really thinks assuming linearity is a problem I expect him to write loud critical papers of virtually every paper in the field. He should be shouting loudly that the IPCC report is based on unjustified assumptions and its conclusions lack merit. The fact that he isn't doing that shows just what a silly piece of political posturing this is.

And it isn't even peer reviewed!

Ian, Doug has written critiques of other climate science papers along these lines.

Ian H, I said the same thing but more obliquely upthread. And Keenan did not respond then either.

Criticism fail either way. Regards.

Here is a 2011 paper that makes some concepts easier to comprehend -

http://www.buseco.monash.edu.au/ebs/pubs/wpapers/2011/wp4-11.pdf

It is specifically about temperature/time series analysis.

The authors:

"We conclude that there is sufficient statistical evidence in the temperature data of the past 130-160 years to conclude that global average temperatures have been on a warming trend. The evidence of a warming trend is present in all three of the temperature series. Although we have used unit roots and linear trends as a coordinate system to approximate the high persistence and the drift in the data in order to answer the questions, we do not claim that we have uncovered the nature of the trend in the temperature data. There are many mechanisms that can generate trends and linear trends are only a first order approximation (see Granger 1988). It is impossible to uncover detailed trend patterns from such temperature records without corroborating data from other sources and close knowledge of the underlying climate system."

If only the climate hypesters were subject to anything close to this level of scrutiny.

Mumbo-jumbo statistical salad, made to keep the true believers on the farm in a flurry of jargon that psychologically erases the pause, letting blissful doomsday stick around and making us skeptics bad people for denying the future its great fire.

Nik,

Keenan is no warmist apologist. I have read his site and an excellent paper he wrote taking down the IPCC.

I for one am going to sit quietly and let those who can confidently dig deeply into this have at it.

Doug, your reply #1, 'if you "have no idea" what a nonlinear time series is'... What? He said nothing of the sort. He said, quite reasonably, that he has no idea what "relevant equations (for the noise) are linear" means, which is quite different from your reply. Ross then asks "what relevant equations?" and proceeds to explain what he did in fair detail, which you completely skipped in your response.

If you can't even respond directly in your first bullet point, it is difficult to take the others seriously. You seem to either not understand what Ross has done, or simply cannot figure out how to explain why it is so wrong. Either way, your comments do not seem to really address what he has stated.

Mark

Here is a controversy that was forgotten very quickly and the lie machine rumbled on unhindered.

http://www.independent.co.uk/environment/climate-change/climategate-scientist-hid-flaws-in-data-say-sceptics-1886487.html

But the new allegations go beyond refusing FOI requests and concern data that Professor Jones and other scientists have used to support a record of recent world temperatures that shows an upward trend.

Climate sceptics have suggested that some of the higher readings may be due not to a warmer atmosphere, but to the so-called “urban heat island effect”, where cities become reservoirs of heat and are warmer than the surrounding countryside, especially during the night hours.

Professor Jones and a colleague, Professor Wei-Chyung Wang of the State University of New York at Albany suggested in an influential 1990 paper in the journal Nature that the urban heat island effect was minimal – and cited as supporting evidence a long series of temperature measurements from Chinese weather stations, half in the countryside and half in cities, supplied by Professor Wei-Chyung. The Nature paper was used as evidence in the most recent report of the UN’s Intergovernmental Panel on Climate Change.

See also

Climategate intensifies: Jones and Wang apparently hid Chinese station data issues

http://wattsupwiththat.com/2010/02/01/climategate-intensifies-jones-and-wang-hid-chinese-station-data-issues/

Sep 29, 2014 at 4:38 PM | michael hart

Classic La Rochefoucauld: "We only confess our little faults to persuade people that we have no big ones."

Sep 30, 2014 at 2:31 AM | Ian H

+1

Doug should include a disclaimer like the following.

The calculations in this work rely on assumptions: about linearity and trend stationarity (and normality). Those assumptions are unjustified and might well be unjustifiable. Relying on different assumptions might well lead to conclusions that are very different from the conclusions of this work. Hence, the conclusions of this work should be regarded as highly tentative.

That effectively consigns the whole thing to the garbage.

After all, what's fair for one should be fair or all.

OT but we are heading for the driest September since records began. This is due to a meandering jet stream; the same root cause as the non-record wetness of last year that was blamed on global warming by far too many opportunistic journalists, politicians, environmentalists and even some ignorant or lying 'scientists'.

This is an academic dispute between a member of the financial community and one from the econometrics community. Both communities of such 'experts' not only failed en masse to predict the recent financial collapse but undoubtedly also made the situation much worse through over-belief in models with over-simplifications of complex systems, and through collective deafness to common sense. Alas, the climate system is even harder to predict than the stock market.

In reality these types of disputes about what assumptions to make in an uncertain world are extremely common for the simple reasons that a) the equations we know about cannot be used without gross simplifications that everyone knows are invalid and b) there is a lot we still don't know that we are forced to ignore. Hence we can only identify which assumptions are least bad by testing, testing and testing again, and the more the variables, the trickier it becomes.

This unfortunate reality, by the way, applies also to the maths and physics within climate models, so those who declare that such models are based on the underlying physics and can thus be relied upon (even when they fail the essential tests) are either grossly ignorant or out-and-out charlatans.

It is important to hold to principles for research conduct, regardless of which side of the global-warming debate someone is on. Ross McKitrick has published a paper that deliberately and substantially misleads. Some people want to defend that. There is no valid defense.

It is true that there are researchers on the alarmist side of the debate that have used methods similar to those of McKitrick’s paper. The IPCC, in particular, uses related methods. I have written a critique of the statistical analyses in the most-recent IPCC Assessment Report (AR5). The critique concludes that all the statistical analyses are unfounded, for reasons similar to those that make the analysis in McKitrick’s paper unfounded: the methods are unjustified, and seemingly unjustifiable.

Lord Donoughue submitted the critique to the U.K. Department of Energy and Climate Change. That led to a meeting at the Department, with the Under Secretary of State and the Chief Scientific Adviser, among others. Twelve days later, on January 21st, the U.K. government announced, in Parliament, that it would no longer rely on observational evidence for global warming [HL4497]. This is a huge advance for global-warming skepticism, and it was my statistical critique that undergirded it.

McKitrick’s paper relies on a method that he knows to be seemingly unjustifiable. It went beyond that too. His paper claims that the method used is “valid as long as the underlying series is trend stationary, *which is the case for the data used herein*” (emphasis added). The emphasized part is stated as though it were an established fact. It is not a fact: it is an unjustified assumption, and it is disputed by many researchers, and McKitrick knows all this. The claim is thus substantially misleading, and that misleadingness must have been deliberate.

Any author who deliberately and substantially misleads deserves strong censure, regardless of which side of the global-warming debate they are on. If an author on the alarmist side did something like what McKitrick did, consider how you would respond to that.

Lastly, I have previously filed formal allegations of fraud against researchers on the alarmist side. Such allegations have received attention: reports in hundreds of newspapers; interviews on BBC Newsnight and BBC World Service; a report by CBS News; discussion by a U.K. Parliamentary committee; citation on the floor of the U.S. Senate; etc.