Thursday, December 24, 2009

detection and attribution

There have been numerous comments and inquiries about this statement by Myles Allen and myself in our Nature-online piece (no longer freely available): "The e-mails do not prove, or even suggest, that the main product of CRU, namely the record of global surface air temperature based on thermometer readings, has been compromised. Indeed, the thermometer-based temperature record has been verified by results from other groups." I want now to take the opportunity to explain my position. These are my arguments.

However, among the public, and also on this blog, such doubts are now being raised. I do not think that they are warranted – even if some questions on technical issues (related to the homogeneity of sub-data sets and their corrections) may need some additional analysis (a process normal in science). But given these doubts, a re-analysis by an independent group is in any case required – to further demonstrate the validity of this product.

Personally, I am convinced, insofar as is possible in an empirical science, that anthropogenic climate change is taking place and will emerge more strongly in the future. For explanation, a few comments are needed:

1) The assessment that elevated greenhouse gas concentrations contributed most of the recent warming since, say, 1970 is made up of two steps, a "detection" step and an "attribution" step. Both steps operate under some assumptions – and the assessment of the extent to which these assumptions are valid is to some extent subjective.

2) The detection step reveals that the warming trend extending across the recent few decades is more rapid than warming or cooling trends that would be expected from internal variability alone (from phenomena such as El Niño, the Pacific Decadal Oscillation and so on). The statement is not that the present level of warmth is unprecedented, even though it may very well be, but that the speed of warming is remarkable. The description of the warming in recent decades ("the signal") is based on thermometer data, including the CRU data. Even if these data may not be perfect, the description of the recent warming is robust. The "detection" is based on a rigorous statistical analysis, but depends on our understanding of the natural variability. The latter, the level of natural variability, is estimated from the thermometer-based temperature record and from long climate model simulations.

That the data base is really good enough for estimating the range of internal variability cannot rigorously be demonstrated. However, given the quality of our climate models in reproducing various aspects of the global climate and its change, and the consistency of model-based and thermometer-based large-scale temperature variability, I am confident that our present estimate of internal variability, derived from thermometer data and long control runs, is approximately realistic. But, while I am unable to prove positively that my estimate is correct, any doubt will essentially be based on a general gut feeling. Only time will eventually help us to overcome this remaining, unavoidable uncertainty.
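The logic of the detection step described above can be sketched as a simple significance test: does the observed trend exceed what internal variability alone would plausibly produce? The sketch below uses AR(1) noise as a hypothetical stand-in for long control-run output; all parameters and the "observed" trend are illustrative, not fitted to any real data set.

```python
# Detection sketch: compare an observed trend against the distribution
# of trends produced by internal variability alone. AR(1) noise is an
# illustrative stand-in for control-run output; parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_years = 40           # length of the trend period
phi, sigma = 0.6, 0.1  # assumed AR(1) persistence and noise level (deg C)

def ar1_series(n, phi, sigma, rng):
    """Generate one realization of AR(1) 'internal variability'."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def trend(x):
    """Least-squares linear trend (deg C per year)."""
    t = np.arange(len(x))
    return np.polyfit(t, x, 1)[0]

# Null distribution of trends under internal variability alone
null_trends = np.array([trend(ar1_series(n_years, phi, sigma, rng))
                        for _ in range(5000)])

observed_trend = 0.02  # illustrative: roughly 0.2 deg C per decade
# One-sided p-value: how often does internal variability alone
# produce a trend at least this large?
p = float(np.mean(null_trends >= observed_trend))
print(f"p-value under internal-variability null: {p:.4f}")
```

Note that the result depends entirely on the assumed variability parameters – which is exactly the point made above: the detection step is rigorous given an estimate of internal variability, but that estimate itself cannot be proven correct.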

3) Attributing observed temperature variations to specific causes relies more on climate models, as they are needed to discriminate between the responses of the climate system to different "drivers", such as solar activity, greenhouse gases and volcanoes. It turns out that the best, and really the only, satisfactory explanation of the history of surface air temperature change, particularly over the last few decades, is obtained when the warming influence of anthropogenic greenhouse gases is taken into account. These gases are behind most of the recent decades' warming.

This attribution step is more uncertain than the detection step, because it relies on the skill of present-day climate models to describe the large-scale response of the climate system to various external factors. Most climate scientists find the evidence that the models do a reasonable job sufficient, but chances remain for future revisions. In principle there may even be external factors that we do not know of.
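In its simplest form, the attribution step amounts to a regression of the observed record on model-simulated responses to the individual drivers ("fingerprints"), asking which combination best explains the observations. The sketch below uses entirely synthetic fingerprints and observations, so the shapes and scaling factors are illustrative only.

```python
# Simplified attribution sketch: regress "observations" on fingerprints
# of anthropogenic and natural forcing. All series are synthetic
# illustrations, not real model output or measurements.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2010)
n = len(years)

# Hypothetical fingerprints: anthropogenic response (accelerating warming)
# and natural response (slow oscillation, e.g. solar/volcanic).
anthro = 0.0001 * (years - 1900) ** 2                     # deg C
natural = 0.05 * np.sin(2 * np.pi * (years - 1900) / 60)  # deg C

# Synthetic "observations": mostly the anthropogenic signal plus noise
obs = 1.0 * anthro + 0.5 * natural + rng.normal(0.0, 0.08, n)

# Attribution as multiple linear regression: which scaling of the
# fingerprints best explains the observations?
X = np.column_stack([anthro, natural])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(f"scaling factors: anthropogenic={beta[0]:.2f}, natural={beta[1]:.2f}")
```

The epistemic uncertainty mentioned above enters through the fingerprints themselves: if the models misrepresent the response to a driver, or a driver is missing entirely from the regression, the recovered scaling factors are biased no matter how good the statistics are.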

In summary: Most climate scientists are convinced that a warming is going on which cannot be explained by internal dynamics (detection). The best explanation for this is – given knowledge gathered and re-examined during many years of research – the effect of elevated greenhouse gas concentrations (attribution). In the case of detection, the uncertainty is mostly of a statistical nature, while in the case of attribution it is also of an epistemic nature.

... Now, when I say, "most scientists", we should ask Dennis Bray for a quantification.


Charlie Martin said...

Dr von Storch, one of the things that came out of the Climategate Files, and some of the various examinations of data following, is that corrections applied during homogenization of the data often appear to contribute most or all of the warming signal for the last 50-100 years. Examples observed have included the effects of sites selected, as in the Russian data, mysterious step functions as in the "Darwin Zero" data, and straight out unexplained corrections as observed by Keen.

Given that, and the inner clique's other actions, it's hard for an outsider to be confident of the validity of the post-homogenization data.

Hans von Storch said...

@Charlie Martin, and others: I fully understand and accept that you find it difficult to believe my "trust us, the thermometer record is ok". But please allow me to have this confidence, which does not mean that there are no errors in the data set – errors which, in my view, do not compromise the overall description of global mean air temperature.

Therefore I favour a re-analysis done by an independent institution, e.g., the Climate Service Center in Hamburg (Guy Brasseur), the Dutch, the Czech, the Finnish or the South African Weather Service - just to mention a few.

Anonymous said...

Dr von Storch, the doubts that I have about the temperature records relate to the disclosure of many downward adjustments in the first part of the 20th century, the selection of stations that show warming and rejection of stations that do not, and in particular a lack of adjustment for delta UHI effects.

We see unexplained changes in the population of temperature stations used, and in particular a reduction over the past few years.

Further, we have the revealing information from Anthony Watts about the poor quality of many temperature stations compared with the required standards.

All this has been amply discussed at various blogs (on the one hand) and poorly explained in the 'peer reviewed' literature.

Most compelling to me was the work of John Daly who showed the record for a temperature station each day - "not much global warming here".

Given that we clearly cannot trust the temperature record as presented to us by CRU and GISS, it seems a stretch to say that "the temperature record demonstrates that the warming is caused by anthropogenic factors."

To the extent that we see warming, it seems likely to me that it could be due to greenhouse gas emissions, land-use factors, and natural factors (some of which may or may not have been identified).

The CRU e-mails affair has confirmed the worst doubts of sceptics about climate science, and I for one do not trust the climate scientists who have chased headlines in recent years with literally thousands of 'OMG, it's worse than we thought' stories – helped along, of course, by a complicit, uninformed and unquestioning media.

For the record, I am not opposed to taking intelligent remedial action provided that the case is made properly, and independent due diligence confirms that the analyses and reports are sound. We are nowhere near that situation though.

Finally, re models: I have deep experience of developing and running financial models. Many years ago we learned to use Monte Carlo simulation, applying uncertainty factors to key inputs. Most of the time, the resulting output showed flat kurtosis, which says that there is a more or less equal probability of any outcome.

It would be interesting indeed to see Monte Carlo analysis applied to climate models. There is a marvellous program called @Risk that can be used to do this work.
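The kind of Monte Carlo analysis the commenter describes can be sketched in a few lines without any special-purpose software: sample the uncertain inputs, run the model many times, and inspect the output distribution. The "model" below is a toy equilibrium-warming relation, and the input distributions are illustrative assumptions, not assessed values.

```python
# Generic Monte Carlo uncertainty propagation, in the spirit of the
# @Risk approach described above. The "model" is a toy energy-balance
# relation, not a real climate model; input ranges are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10000

# Uncertain inputs (assumed distributions):
# sensitivity parameter lam (deg C per W/m^2) and forcing F (W/m^2)
lam = rng.normal(0.8, 0.2, n_draws)
forcing = rng.normal(3.7, 0.4, n_draws)

# Toy model: equilibrium warming = lam * F, evaluated for every draw
warming = lam * forcing

# Summarize the resulting output distribution
lo, med, hi = np.percentile(warming, [5, 50, 95])
print(f"5th/50th/95th percentiles: {lo:.2f} / {med:.2f} / {hi:.2f} deg C")
```

Whether the output distribution comes out flat or peaked depends, of course, on the model structure and the input distributions chosen – which is why the choice of those inputs, not the sampling machinery, is where the real argument lies.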

TCO said...

HvS: Great post. One would think it is obvious to people that your impressions are based on a wealth of personal reading and work, but that they are in some sense subjective and integrative... still your "betting Bayesian" belief, and a remarkably well-informed one. That you had to explain this to the hoi polloi is sad, but not surprising. There are really a rare few who can disaggregate sides and issues. You, Zorita, perhaps Huybers are some that I feel would be Roman judges, finding insight wherever it took you. I find in contrast the behavior (especially in publishing only certain things) of Mann, McI etc. to be less genuinely curious and more oriented towards supporting pre-existing beliefs than really testing hypotheses. There are a few posters that I feel are genuinely curious and willing to look at evidence either way also. But it is well under 10% of the audiences at the blogs (Steven Mosher, JohnV, Lazar).