(with two clickable links added on 12 December 2011)
On 31 August 2011 Fred Singer gave a lecture at the Royal Netherlands Meteorological Institute (KNMI). The lecture was followed by a discussion of two propositions, which had been put to Dr Singer beforehand (on 25 July 2011; see schedule):
- A: Climate scientists must communicate uncertainties and their consequences.
- B: None of current climate models overcome chaotic uncertainty.
Proposition B was one of the conclusions of Dr. Singer in his lecture. In essence, Sybren Drijfhout argued that this proposition was incorrect, because:
- It was based on a case study which did not allow generalization.
- KNMI had made runs with a ‘current climate model’ which actually did overcome ‘chaotic uncertainty’ (i.e. noise due to variability).
On 17 October 2011 I initiated an e-mail exchange, hoping to arrive at a joint statement. Initially there was some encouraging convergence. However, the final mails in this exchange, in December 2011, made further convergence unlikely.
I believe it is important that I present my conclusions:
- Both Singer and van den Hurk endorse proposition A.
- Drijfhout refuted Singer’s conclusion (proposition B). Singer’s reaction is inadequate.
(Adapted with permission from
http://home.kpn.nl/g.j.komen/singer-knmi-discussion.pdf)
20 comments:
What I noticed is that Professor Singer repeatedly asked for the 17 OLS trends and did not receive them. To me personally, a clear bias against Mr Singer seems evident here.
"That's why I keep asking for the 17 OLS trends for the interval 2010 to 2050 [This is the fourth time that I have asked for this info]"
Pascal
I understand it this way: Singer got access to the KNMI runs and could calculate whatever he wants, but there were no volunteers to do the job for him. From the one diagram shown by Gerbrand Komen (the red band), one can see that all trends (in whatever way they may be estimated from the data) were positive and markedly non-zero. The probability of observing 17 positive trends in 17 sample sections of 40 years length, when there is actually no trend, is 1/17, a rather small value. Thus, the null-hypothesis that the model running with the specific GHG scenario actually generates no trend, is rejected with a risk of 1/17.
Thus, even if there was a bias against Singer, this episode is no evidence for one. What I see is a clear unwillingness on Singer's side to agree on a procedure to sort out technical issues.
OLS means ordinary least squares, a standard approach to fitting trend lines (good old Gauss).
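(For readers who want to see the mechanics, here is a minimal sketch of such a fit in Python/NumPy; the 40-year series below is synthetic and purely illustrative, not model output:)

```python
import numpy as np

# Purely illustrative: a synthetic 40-year annual-mean temperature series
years = np.arange(2010, 2050)
rng = np.random.default_rng(0)
temps = 0.03 * (years - 2010) + rng.normal(0.0, 0.1, size=years.size)

# OLS: fit a straight line; the slope is the trend (here in K/yr)
slope, intercept = np.polyfit(years, temps, deg=1)
print(f"OLS trend: {10 * slope:.3f} K/decade")
```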
"The probability of observing 17 positive trends in 17 sample sections of 40 years length, when there is actually no trend, is 1/17, a rather small value."
Isn't that rather 1/(2^17)?
True, flamme. You are right. My error.
Hans
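(To make the corrected number concrete, a one-line sign-test check under the null hypothesis of no trend, assuming the 17 segment trends are independent:)

```python
# Sign test: P(17 positive trends out of 17 | no trend, independent segments)
p = 0.5 ** 17
print(p)  # 7.62939453125e-06, i.e. 1/131072
```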
I am rather disappointed in the exchange; I would have liked to see it continue.
In a few places the tone is patronising to Singer - I don't see the same in Singer's responses.
There is a dispute about obtaining data, the exact nature of which isn't obvious to me.
In the letters Gerbrand Komen says, "KNMI has indicated how you can retrieve these data, but you need someone to help with this, and so far no one has volunteered".
Singer then says, "[the detailed info from Essence] is not on the internet but in the archives of the Project".
Komen: "PS To access the Essence data I believe Wilco Hazeleger's guidance should be adequate."
Meanwhile, Singer tentatively concludes, "...I find that if you use longer runs, then you get 'convergence' of the cumulative trend (ensemble-mean) with fewer runs". Drijfhout apparently agrees with this.
Komen is then determined to press on with a joint statement despite conceding, "In your lecture you were referring to a specific simulation (MRI, IPCC 20CEN), and you were not aware of ESSENCE." He wants to present this even though Singer obviously hadn't had time to understand the implications of these new data.
Singer: "Drijfhout also claims that if the forcing is strong. I need to see proof before I can agree."
By Dec 4 after a silence Singer reiterates, "I ... was counting on Drijfhout to back up his claim that the 'spread' in trend values depended on the level of forcing."
Singer claims to have disproved this in the meantime, says he is quite confident in his results, and asks for one of Drijfhout's control runs.
Finally, Komen says Singer has "start[ed] all over again" and is "repeating [him]self". I don't see the repetition. It looks like they are trying to gently persuade Singer to assent to a position he disagrees with. Then Komen declares, end of conversation.
I really wish this conversation could continue - it certainly strikes me as completely unwarranted to conclude from all this, "Drijfhout refuted Singer’s conclusion (proposition B)" because "Singer’s reaction is inadequate".
Hans #2,
You said, "From the one diagram shown by Gerbrand Komen (the red band), one can see that all trends (in whatever way they may be estimated from the data) were positive and markedly non-zero".
Is this based on a personal communication or the email discussion? If the latter, Komen says this about the diagram you refer to:
"...I searched a little bit to satisfy my own curiosity, and I found this plot in which different runs are superimposed".
So how do we know that the plot is a plot of all 27 runs and not a selection? Komen doesn't say or seem to know exactly what it is that he has found.
Independently of the miscommunication about how to access the data, I cannot see the validity of Singer's arguments. He seems to be arguing that the power of a test to detect a trend increases with the length of the simulations, but he doubts that the magnitude of the forcing causing the trend plays any role. If I understood properly, his final goal is to cast doubt on the capability to detect a trend because the 'climate is chaotic' and models do not capture this chaotic behavior.
These arguments, however, are clearly wrong. If the forcing causing the trend is strong, it will obviously be easier to detect this trend above the noise caused by natural variations. I do not understand why he has to ponder over this question. Also, that a strong enough forcing can overcome natural variability is very clear in the observations: every year we see that summer is warmer than the following winter. The forcing here is strong enough to be clearly detectable although the climate is chaotic. So this is not a 'question of principle'; it is a question of the relative magnitudes of the forcing and the natural variations, and of the characteristics of the natural variability. If Singer's argument is meant to highlight an intrinsic incapability of detecting trends in the presence of chaotic behavior, then he is wrong.
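(Eduardo's point is easy to demonstrate numerically. The following sketch — all parameters invented for illustration — simulates 40-year series as trend plus noise and counts how often an OLS fit recovers a positive trend, for weak versus strong forcing:)

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(40)

def positive_trend_fraction(trend_per_yr, noise_sd, n_runs=1000):
    """Fraction of noisy 40-year runs whose fitted OLS slope is positive."""
    count = 0
    for _ in range(n_runs):
        series = trend_per_yr * years + rng.normal(0.0, noise_sd, size=years.size)
        count += np.polyfit(years, series, deg=1)[0] > 0
    return count / n_runs

# Same noise level, different forcing strength (all values invented)
print(positive_trend_fraction(0.001, 0.15))  # weak forcing: well below 1
print(positive_trend_fraction(0.020, 0.15))  # strong forcing: essentially 1
```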
@ Eduardo
Honestly it's hard to understand (and to refute) Singer because of his moving goal posts. Refute one argument and you have to learn that you "misunderstood" his former positions.
Andreas
Alex Harvey #5
Could you please indicate which places you found patronising? I really try to avoid this. So may be I can learn. I believe the only basis for a good discussion is mutual respect.
Something else: plots of the 17 individual Essence runs had been sent to Singer, as well as straightforward instructions of how to access the digital data.
Singer probably misses the point of his own argument?
We know that there's at least one model that is right - earth's climate system itself.
It is agreed upon (or so it seems) that it is chaotic and its initial conditions and state are never known exactly.
But above all we know that this true model will only ever perform one single 21st-century run - a sample size of just one against which to validate all other (computer) models.
That only makes sense under the premise that the true model is mainly governed by determinism and shows little chaos - so the single run available can be assumed to be a very representative one. However, that premise rules out computer models showing considerable variability (as seems to be the case with some at least). So under that premise the need for more runs alone (because the results aren't stable) rules out such a model.
Sybren Drijfhout gave me permission to give a link to his original presentation: http://bit.ly/sT5nHm, as used on 31 August for the discussion. This very document was made available to Fred Singer. Slides 5-22 are graphic representations of the individual runs.
Gerbrand,
although I am not in support of Singer's claims at all, it is ridiculous to base a scientific dispute upon such a presentation.
It doesn't even mention the model's name, nor the baseline for comparison with the instrumental record, nor which instrumental record it was compared against, nor how well it fitted the 19th-century record, nor any absolute temperature values (we know some models are off from reality by some K, which IMO rules them out). This is completely unscientific.
So is there a 17 x 70(+?) numerical table available with _absolute_ globally averaged temperatures from these model runs? I guess not.
wflamme #12
Let me clarify. In the discussion on 31 August Drijfhout had 5 minutes to present his case (The schedule is still here: http://bit.ly/vEtmSA).
Full references were given to Singer afterwards in my e-mails. The detailed references are there, as well as the name of the model used in Essence: ECHAM5/MPI-OM
As far as I understand the topics you raise were not discussed.
Singer argued that you need many model runs for the significant detection of a trend in a model-generated time series. Drijfhout argued 'not that many'.
No comparison with observations. No statements about the 'real world'.
As I said he's not defending his case well. Since there's no 'best' model one can always cast doubt upon the validity of every model even if there are 'enough' model runs available.
Apparently Singer actually believes/believed the warming trends seen in all the runs were obtained just by chance - but he didn't do the proper math of how unlikely that was given the null hypothesis of no trend.
Gerbrand Komen #9, my apologies.
I have re-read the exchange and I find it is probably not "patronising" in those few places as I thought it was on my first reading. Certainly, it appears Singer was irritated when he was told that Drijfhout's argument was based on "elementary statistical theory". Upon reflection, I am not sure that there is anything impolite about asserting that the argument is based on elementary statistics - although I can imagine if you've been around as long as Singer you could be irritated by this sort of thing. Of course, Singer was probably more upset that he was misquoted, but I am sure that was unintentional. I also recalled feeling Singer was likely to be irritated by the assertions that the models were reproducing natural variability so well - the comment "Amazing!" - although on reading it the second time I didn't have the same impression.
So I would withdraw my remark that the tone of the discussion was a significant part of the problem in the discussion. I still suspect that Singer genuinely was frustrated not to have data he wanted and I also still maintain that the discussion seems to have been terminated unnecessarily.
For what it's worth, I found the discussion very interesting. I personally find these sorts of discussions very helpful to evaluate which of the dissenting scientists might have valid objections that need to be discussed and which ones don't. I wish there were more of these discussions available.
In this case, it appeared that the discussion was progressing well. Both sides had conceded some misunderstandings in a few places. And Singer had modified his position accordingly. I guess the problem is that you wanted a joint statement, whereas Singer appears somewhat uninterested in the joint statement (less interested in the points of agreement) and more interested in resolving the scientific dispute (or perhaps more interested in proving Drijfhout wrong).
I have added two clickable links to the body of the thread: one to the agenda of the KNMI meeting and one to Sybren Drijfhout's presentation.
Btw:
Singer's paper is available here:
http://www.sepp.org/science_papers/ICCC_Booklet_2011_FINAL.pdf
I enjoyed my Aug 31 lecture at KNMI and appreciate the way it was received. It covered several topics, including Chaoticity of Models. I also thought the discussion was handled in a courteous manner and I appreciate the effort by Gerbrand Komen to arrive at an agreed position.
However, I am extremely frustrated by the fact that I have not been able to get the information that I need in order to respond properly to his questions. As you can see, I responded many times but without any success in getting the information. Also, Komen does not give details about the background to this discussion.
What is the background? In my lecture I showed the published results of an MRI (Japan) model, giving the results of 5 successive runs, each 20 years long, of the same model. They also supply the OLS trends calculated for each of the runs. They are as follows (in degC/dec):
0.042, 0.348, 0.277, 0.362, 0.371
They also publish the ensemble-mean of the trends of the 5 runs as follows: 0.280
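(As a check: (0.042 + 0.348 + 0.277 + 0.362 + 0.371)/5 = 1.400/5 = 0.280, i.e. the published ensemble-mean is simply the arithmetic mean of the five run trends.)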
As you can see, the individual trends differ by almost an order of magnitude. If they had made a 6th run, they would have gotten yet another trend value and a somewhat different ensemble-mean. Conversely, if they had stopped after their first run they would have gotten an ensemble-mean of 0.042, which is vastly different from their published ensemble-mean of 0.280.
I then ask the obvious question: how many runs are required before the ensemble mean converges to a final value? NB: The IPCC 20CEN compilation had 22 models; 11 had only 1 or 2 runs, and the rest had no more than 5 runs.
What I did: I was able to get an unforced control run (i.e. no change in forcing) of length 1,000 years. I divided it into 25 segments of 40 years each, and obtained a set of 25 trend values, all different. I found, empirically, that the trend ensemble-mean converged to zero (as it should) after 10 runs. I have tried to publish this result to make modelers and others aware of the chaoticity of climate models and to point out that ensemble means based on a single run are not of great value. I also pointed out that the common method of averaging the ensemble means of the available models, without regard to the number of runs of each model, is not a good procedure since it gives undue weight to models with only 1 or 2 runs.
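(This procedure is straightforward to restate in code. A sketch, with synthetic AR(1) "red" noise standing in for the unforced control run - the noise model and its parameters are illustrative assumptions, chosen only to show the mechanics:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a 1000-year unforced control run: AR(1) red noise
# (noise model and parameters are illustrative assumptions)
n_years = 1000
x = np.empty(n_years)
x[0] = 0.0
for t in range(1, n_years):
    x[t] = 0.7 * x[t - 1] + rng.normal(0.0, 0.1)

# Split into 25 non-overlapping 40-year segments, fit an OLS trend to each
segments = x.reshape(25, 40)
t40 = np.arange(40)
trends = np.array([np.polyfit(t40, seg, deg=1)[0] for seg in segments])

# Cumulative ensemble-mean of the trends: watch it settle toward zero
cum_mean = np.cumsum(trends) / np.arange(1, 26)
print(np.round(cum_mean, 4))
```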
What I found at KNMI: Thanks to my exposure at KNMI, I learned that they had run the Essence Experiment with 17 runs. Finally, I thought, here was a way of checking my results, using a real climate model with increasing forcings, rather than a control run without an increase.
What I tried to get: I asked for the 17 trend values obtained from the 17 runs, using the 40-yr period of 2010 to 2050. I think it would be very interesting to see if the ensemble mean again converges to an asymptotic value after 10 runs. I would be happy to see KNMI publish such a result as an important contribution to the work on climate modeling.
I am still waiting for a positive reply.
Only today did I notice the reaction placed by Fred Singer on December 12.
I am glad he enjoyed his visit at KNMI, and I am grateful for his patience during our e-mail exchange.
At KNMI we discussed the proposition “None of current climate models overcome chaotic uncertainty”, one of the conclusions of Dr Singer in his lecture. This proposition was refuted by Dr. Drijfhout who presented an ensemble of 17 climate model runs in which the trends were clearly much larger than the noise.
Professor Singer now writes that he was frustrated by not being able to get the information that he needed for a proper response. This surprises me since he was given access to the model data.
Professor Singer ends his comment of 12 December by asking for the 17 trend values obtained from the 17 runs, using the 40-yr period of 2010 to 2050. I checked with KNMI, and Andreas Sterl was kind enough to provide the following table (see also this figure):
runs averaged   trend (K/decade)
 1              0.3267
 2              0.3312
 3              0.3170
 4              0.3098
 5              0.2994
 6              0.3053
 7              0.3069
 8              0.3128
 9              0.3144
10              0.3124
11              0.3140
12              0.3151
13              0.3129
14              0.3112
15              0.3108
16              0.3100
17              0.3115
The numbers in the second column give the trend in the global mean temperature (in K/decade) over the period 2010-2050. The first column specifies the number of runs that have been used to compute this trend. So "1 0.3267" means that the trend of member 1 is 0.3267 K/decade; "2 0.3312" means that the trend of the average of members 1 and 2 is 0.3312 K/decade; and so on, up to "17 0.3115", which gives the trend of the average of all 17 members. I hope that this takes away (some of) the frustration of Prof Singer.
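(Note that row n is the trend of the n-member ensemble average, not the trend of run n alone. A sketch of that computation; the array `runs` is a hypothetical stand-in for the actual ESSENCE global-mean series, which are not reproduced here:)

```python
import numpy as np

def cumulative_ensemble_trends(runs, years):
    """runs: (n_members, n_years) array of global-mean temperature (K);
    returns, for each n, the OLS trend (K/decade) of the average of
    the first n ensemble members."""
    trends = []
    for n in range(1, runs.shape[0] + 1):
        slope = np.polyfit(years, runs[:n].mean(axis=0), deg=1)[0]
        trends.append(10 * slope)  # per-year slope -> per-decade
    return trends
```

(Because an OLS slope is linear in the data, the trend of the n-member average equals the average of the n individual trends, so the table above also doubles as the running mean of the 17 individual trend values.)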
Correction: Singer's comment was dated December 13, not December 12.
Addition: Andreas Sterl informed me that the ESSENCE data are publicly available on the KNMI Climate Explorer -> Monthly CMIP3+ scenario runs (menu on the right) -> ESSENCE (scroll down about 2/3 of the page).