Why does climate research need the humanities?
Climate research is largely dominated by the natural sciences. Typical questions concern how the components of the climate system influence one another, for instance ocean and atmosphere, and how the system as a whole responds to external "forcings", such as variable solar output or an increased presence of greenhouse gases. Other questions aim at probability distributions, especially of extremes, or at the effects that changing weather and climate may have on society and on the natural and built environment. In this approach the human being, as a subject with values and perceptions, does not appear: humans are part of the "forcing", in that they release substances. They are also an object when it comes to dealing with the hazards and opportunities of climate.
As far as this latter role is concerned, climate impact research, particularly in geography and in economics, takes up many of the relevant questions.
But the human being is more than that, namely a subject, confronted with perceptions and interpretations, with societal decisions in a context full of values. Society then presents itself as a collection of "tribes", each equipped with its own values, rules, interpreters and chieftains. One tribe might be that of the coastal dwellers, another the tribe of climate researchers ('Stamm der Klimaforscher'), a third that of NGOs, a fourth that of devout Christians: different groups of social actors, which of course are never disjoint and never sharply defined. The "tribes" are merely a conceptual bracket, an analogy that must not be pushed too far. Still, the analogy leads us to different knowledge claims and their social significance, and thus to questions of power and superiority, in essence of interpretive authority.
This is where the humanities (Geisteswissenschaften) can come in, from anthropology to literary studies, history and sociology, to name a few. The task is to illuminate the different forms of knowledge and knowledge claims, and to determine their social anchoring and their political utility. Further questions concern the extent to which science is a special form of knowledge carrying particular authority, and whether that authority is put to use for ideologically or economically driven interests.
The talk also addresses the concept of the "climate trap" ("Klimafalle"), developed in the book of the same name by Hans von Storch and Werner Krauss. This concept holds that scientific knowledge has become the essential source of legitimacy in political decision-making, at least in the case of climate policy. Political discussion about preferences and values ('Präferenzen und Werte') is replaced by endless debates between warners and skeptics about the "truth" of scientific knowledge, from which political decisions supposedly follow "without alternative". Politics is thus portrayed less as the result of negotiation and the weighing of values than as derivable from the superior knowledge of one of the participating groups of social actors, "science". For science this means that self-critical discourse is possible only in a restricted way ('eingeschränkt'), because such a discourse is perceived as a threat to a climate policy presented as having no alternative.
28 comments:
Dear Hans,
Thanks for this interesting summary.
The last sentence is worrying. Do you have concrete examples of these ‘Einschränkungen’? As a scientist, belonging to the ‘Stamm der Forscher’, I feel we should resist.
I fully agree that Geisteswissenschaften are relevant. In my opinion it is not only a matter of ‘Präferenzen und Werte’, but also a matter of beliefs (not to be seen as ‘religious belief’, but rather as a technical term used by philosophers, psychologists, and social scientists). Unfortunately many of our colleagues don’t seem to be aware of their beliefs; see for example Roger Pielke Sr. on http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/#comment-613, who wrote “I suggest we defer using the word ‘believe’”.
I have been searching for relevant publications, and found two useful rather general overviews:
http://www.andrew.cmu.edu/user/kk3n/reflectionsclass/philsci.html#conf by Kevin Kelly on induction and confirmation, and http://www.hps.cam.ac.uk/research/se.html by Martin Kusch on social epistemology (which is perhaps no Geisteswissenschaft but also relevant in my opinion).
I am still looking for papers that apply this to climate science. A classical paper on scenario development is “Unlearning and backcasting: Rethinking some of the questions we ask about the future”, Technological Forecasting and Social Change Volume 33, Issue 4, July 1988, Pages 325–338.
Another more recent paper is Katie Steele and Charlotte Werndl: Climate Models, Calibration and Confirmation (2012) Accepted for publication in The British Journal for the Philosophy of Science
http://philsci-archive.pitt.edu/9099/1/climatecalibrationconfirmation_Pittsburgh_Final.pdf.
My own view is that it is impossible and meaningless to attempt to ‘refute’ climate models (or, more generally, models of complex systems). More fruitful is a degree of belief, or something like that. Do you agree?
Hi Gerbrand
Thank you for alerting me to your comment.
My view is that climate science must be based on rigorous testable scientific assessments. In the weblog posts discussed at
http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/
this need is clearly shown.
If predictions (projections) on multi-decadal climate predictions are going to be given to the policymakers and impacts communities and claimed to be robust, they must show skill at predicting CHANGES on multi-decadal time scales in global and regional climate statistics in hindcast runs.
The “belief” issue comes in when policy and political actions are selected and the values of individuals, organizations (such as NGOs), etc. are introduced. My son very effectively discusses this subject in his book The Honest Broker.
Best Regards
Roger Sr.
Dear Gerbrand,
You wrote that "The last sentence is worrying. Do you have concrete examples of these ‘Einschränkungen’? As a scientist, belonging to the ‘Stamm der Forscher’, I feel we should resist." - I think I have, but this would mean speaking about specific colleagues, which I would prefer not to do. We should "resist", and I think I am among those who openly and publicly resist, but I meet not only friendly but also hostile responses when doing so.
"My own view is that it is impossible and meaningless to attempt to ‘refute’ climate models (or, more generally) models of complex systems. More fruitful is the degree of belief, or something like that. Do you agree?" - yes, I would to some extent. In a sense climate models are, as all models, "wrong", but useful for certain problems. Which, is a matter of judgment.
Hi Roger
You wrote
“ If predictions (projections) on multi-decadal climate predictions are going to be given to the policymakers and impacts communities and claimed to be robust, they must show skill at predicting CHANGES on multi-decadal time scales in global and regional climate statistics in hindcast runs.”
First of all, what exactly is robust?
But more importantly: to me it seems that your very statement PROVES that beliefs play a role. Your statement does not describe an observation, but it gives your opinion, or in other words ‘your belief’.
Hi Gerbrand
Thank you for the further feedback.
You write
"But more importantly: to me it seems that your very statement PROVES that beliefs play a role. Your statement does not describe an observation, but it gives your opinion, or in other words ‘your belief’."
I am not presenting a "belief". The multi-decadal climate model predictions (projections) must be shown to be robust, as determined by comparisons between the model prediction and the real world observations.
The models themselves are hypotheses, as I discuss in my post
http://pielkeclimatesci.wordpress.com/2010/11/15/hypothesis-testing-a-failure-in-the-2007-ipcc-reports/
If the models are not tested, then we are accepting them as a "belief". However, this is not the scientific method.
Best Regards
Roger Sr.
Hi Roger,
Thanks for your patience. We’re making progress.
Where I perceive a belief, you don’t. How come?
My argument was that your statement
[3] “ If predictions (projections) on multi-decadal climate predictions are going to be given to the policymakers and impacts communities and claimed to be robust, they must show skill at predicting CHANGES on multi-decadal time scales in global and regional climate statistics in hindcast runs.”
does not describe an observation, but instead gives your opinion, or in other words ‘your belief’.
You repeated your statement with slightly different wording
[5] “The multi-decadal climate model predictions (projections) must be shown to be robust, as determined by comparisons between the model prediction and the real world observations.“
Again I would interpret this as a belief since it is not based on observations, but you wrote: “I am not presenting a belief”. So we disagree.
There are 2500 years of publications on the difference between belief and knowledge. Even today there are many schools of thought. So it is not surprising that we don’t seem to have the same interpretation of the concept of belief, and maybe we should just stop here.
However, we both seem to agree that for a belief to become knowledge it is important that the belief is tested against observations. Correct?
If I apply this to your statements [3] and [5] then my question to you is: what tests are showing that these statements are correct?
Gerbrand
Hi Gerbrand –
We are closing in on clarity about our two perspectives. I see now that your view of the term “belief” corresponds to my use of the term hypothesis. It need not be based on observations, but, as we both agree, it must be tested against real-world data in order to see whether it can be falsified or not.
With respect to the statements [3] and [5] that you list, I presented examples in my original guest post of how multi-decadal regional climate predictions (projections) need to be tested using hindcast runs. I listed peer-reviewed papers that show serious shortcomings in the models' ability even to replicate the current climate, much less the changes in regional climate statistics that may have occurred over the last several decades.
If you agree with me on this perspective (and that your use of “belief” corresponds to my use of “hypothesis”) we have come to closure.
Best Regards
Roger Sr.
Hi Roger,
I think there are subtle differences between the concepts of belief and hypothesis, but they are indeed quite similar so let’s forget about the differences for the moment.
My contention is that your statement (or creed)
“If predictions (projections) on multi-decadal climate predictions are going to be given to the policymakers and impacts communities and claimed to be robust, they must show skill at predicting CHANGES on multi-decadal time scales in global and regional climate statistics in hindcast runs.”
is in itself a hypothesis that one would want to test against observations (note that I am not discussing the testing of models but the testing of your ’creed’, i.e. the correctness of the above statement). I suspect that there are no such observations, which would confirm that your creed is an unproven hypothesis. But perhaps I am wrong. Therefore, I have asked you against which real-world observations your creed has been or could be tested.
I can’t understand why you continue to evade this simple question. So let me try to clarify.
You attempted to underpin your creed by referring to your blog post on “the scientific method”, which in turn refers to a Science Buddies website (http://www.sciencebuddies.org/science-fair-projects/project_scientific_method.shtml). As it happens, I have a slightly different view of the scientific method. I don’t see a model as a hypothesis, but rather as an attempt to simulate part of reality. You can then investigate to what extent this attempt is successful. A more extended version of my ideas can be found in my “Uncertainties in climate prediction”, http://home.kpn.nl/g.j.komen/Uncertainties.pdf, which appeared on your blog in 2008.
My views are similar to those of Hans von Storch, who summarized them nicely and concisely in comment [3]: “In a sense climate models are, as all models, "wrong", but useful for certain problems. Which problems is a matter of judgment.” In other words, belief in models is not black or white, as you seem to suggest, but is a gradual thing.
So it seems that we have (slightly) different perspectives of the scientific method. How would we determine who is right, you or me?
Another open question is: what do you mean by ‘robust’?
I am very happy that we are able to have this discussion, and I hope we can still get somewhat further before we close.
Best regards,
Gerbrand
Hi Gerbrand
I am glad we are continuing this discussion. You write
“In a sense climate models are, as all models, "wrong", but useful for certain problems. Which problems is a matter of judgment.” In other words, belief in models is not black or white, as you seem to suggest, but is a gradual thing.
I agree that no model is perfect. However, we need a quantitative (robust) mathematical (statistical) test to determine whether they add value to what is otherwise available. If they do not add value, and indeed can give erroneous results, they are misleading policymakers and the impacts communities. I provided examples of peer-reviewed papers in my post at http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/ which provide robust tests of the multi-decadal climate model runs in hindcast mode.
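[For readers who want to see what such an "added value" test could look like in practice, here is a minimal sketch in Python with invented numbers. The climatology baseline and the mean-squared-error skill score are illustrative assumptions, not the specific tests used in the papers Pielke cites.]

```python
import numpy as np

# Invented example data: observed and hindcast decadal-mean temperatures (deg C).
obs      = np.array([9.8, 10.1, 10.0, 10.4, 10.6, 10.9])
hindcast = np.array([9.9, 10.0, 10.3, 10.2, 10.8, 11.0])

# The "otherwise available" reference forecast: the observed climatology.
baseline = np.full_like(obs, obs.mean())

mse_model    = np.mean((hindcast - obs) ** 2)
mse_baseline = np.mean((baseline - obs) ** 2)

# Skill score: 1 = perfect, 0 = no better than the baseline, < 0 = worse.
skill = 1.0 - mse_model / mse_baseline
print(f"MSE model = {mse_model:.3f}, baseline = {mse_baseline:.3f}, skill = {skill:.2f}")
```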
With respect to the multi-decadal climate runs, as you note I have written that
“If predictions (projections) on multi-decadal climate predictions are going to be given to the policymakers and impacts communities and claimed to be robust, they must show skill at predicting CHANGES on multi-decadal time scales in global and regional climate statistics in hindcast runs.”
To be tested, we first need criteria. One criterion, for instance, could be the ability of the model to predict (project) statistically significant changes in the average length of the growing season in Amsterdam. Using the historical record, we can determine the observed variations in this metric. If the model runs predict (project) a change in the average length, can we show that the observed data show a change that, statistically, agrees with the change predicted by the model?
Other criteria could include July maximum temperatures, January precipitation, etc.
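[As an illustration of the growing-season criterion, a minimal sketch with invented data: it asks whether the model-projected change falls within a confidence interval for the observed change. This is one plausible formalization, not necessarily the exact test Pielke has in mind.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented observed growing-season lengths (days) for two 30-year periods.
early = rng.normal(200, 8, 30)   # e.g. 1951-1980
late  = rng.normal(205, 8, 30)   # e.g. 1981-2010

obs_change   = late.mean() - early.mean()
model_change = 7.5               # hypothetical model-projected change (days)

# 95% confidence interval for the observed change in the mean (two-sample t).
se  = np.sqrt(early.var(ddof=1) / len(early) + late.var(ddof=1) / len(late))
dof = len(early) + len(late) - 2
lo, hi = stats.t.interval(0.95, dof, loc=obs_change, scale=se)

verdict = "agrees" if lo <= model_change <= hi else "disagrees"
print(f"observed change {obs_change:.1f} d, 95% CI [{lo:.1f}, {hi:.1f}]; "
      f"model change {model_change} d {verdict} with the observations")
```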
It is such tests that show why we need to consider models as “hypotheses”. When you introduce “judgment”, as Hans does, to justify accepting a result, one is adding an appeal to authority. One still needs to specifically document what such a “judgment” is based on. If it is just an “expert opinion”, there is clearly the risk of personal biases entering into the judgment.
I look forward to your follow-up comment.
Best Regards
Roger Sr.
Hi Roger,
What you write about quantitative model testing is clear. I have been doing this type of testing for many years myself, so I am quite familiar with this.
I think expert judgement as to the usefulness of a model is always biased. Expert judgement may be based on quantitative measures, but there always are subjective aspects as well. That’s why IPCC writes “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change”, whereas Freeman Dyson insists “I am saying that all predictions concerning climate are highly uncertain.”
It is important to make these subjective aspects explicit. Actually, in my comments on the first order draft of IPCC/AR5/wg1 I argued at some length that IPCC should make their uncertainty assessment and expert judgment more transparent by discussing subjective aspects. [This had no visible impact on the second order draft ;) ]
I also agree that you have to be very careful when you present model results to policy makers.
In the meantime our discussion seems to be diverging. Let me try to summarize where we are (as I see it).
This thread is not on model testing but on the role of humanities in climate science. Humanities discuss such topics as values and beliefs. I commented that not all scientists are aware of their beliefs, quoting you. You reacted, and clarified your position. This led to the identification of what I now call your creed (or belief):
“If predictions (projections) on multi-decadal climate predictions are going to be given to the policymakers and impacts communities and claimed to be robust, they must show skill at predicting CHANGES on multi-decadal time scales in global and regional climate statistics in hindcast runs.”
In your reactions you keep explaining what you mean by the above sentence. However, this is quite clear to me (except for the word ‘robust’, perhaps). So there is no need to repeat this by giving more examples. The point is that I disagree with your creed. I have a slightly different creed (see, e.g., my 2008 post on your blog, http://bit.ly/cCb1n2). So we disagree. Therefore I keep raising the question “Why would your creed be more true than mine?”, trying to make visible that we both have different beliefs.
It looked as if we were stuck here, as we had been going in circles for a while. I ask you why your creed would be better than mine (A), then you reply by explaining how models should be tested (B), and then (A), (B), (A), (B)... [I wonder if there is anyone out there - a communication expert, perhaps - who could get us out of this loop?]
On closer inspection it seems that we have made some progress. I noted that we have (slightly) different perspectives of the scientific method. On one aspect a discussion emerged.
You wrote: “we need to consider models as hypotheses”.
I wrote: “I don’t see a model as a hypothesis, but rather as an attempt to simulate part of reality.”
You then underpinned your view, as follows: ”It is such tests that show why we need to consider models as hypotheses”.
This does not convince me. To me the models themselves are no hypotheses. An example of a hypothesis would be the assertion that a particular model can simulate the evolution of a particular variable over a particular time interval with a given accuracy.
This is illuminating, because it shows that difference in insight is not only a matter of belief, but also of wording and semantics.
Maybe we should just end this discussion by noting that we agree on a number of important aspects (e.g. ‘models should be compared with observations’, ‘current climate models have little or no skill on decadal time scales’, and ‘be careful when you present model results to policy makers’), but also have slightly different views of the scientific method and the use of model results.
I would be interested to learn what the humanities (including social epistemologists) can tell us about the origin of these differences.
Best regards,
Gerbrand
Hi Gerbrand
In your latest comment, you wrote
“There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change”
In my original post, I presented peer-reviewed papers that refute this claim. Regardless of their wishes (which is all I see in the claim of subjective skill), they must first counter the findings I report on.
Before we close our discussion, I request that you succinctly state your “creed” on the issue of multi-decadal climate projections (predictions).
Also, you write
“An example of a hypothesis would be the assertion that a particular model can simulate the evolution of a particular variable over a particular time interval with a given accuracy.”
How is this different from claiming
“a hypothesis would also be the assertion that an ensemble of climate model projections can simulate the envelope of the evolution of the change of the statistics of a particular variable (e.g. temperature or precipitation) over a particular time interval (i.e. several decades) with a given accuracy.”?
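[Stated in this form, the ensemble-envelope assertion is mechanically checkable. A minimal sketch with invented numbers, where a 5-95% ensemble range is an assumed definition of the "envelope":]

```python
import numpy as np

# Invented projected changes (deg C over several decades) from a 12-member ensemble.
ensemble_changes = np.array([0.4, 0.6, 0.5, 0.9, 0.7, 0.3,
                             0.8, 0.6, 0.5, 1.0, 0.7, 0.4])
observed_change = 0.55           # hypothetical observed change over the same interval

# Assumed definition of the "envelope": the 5-95% range of the ensemble.
lo, hi = np.percentile(ensemble_changes, [5, 95])
inside = lo <= observed_change <= hi
print(f"envelope [{lo:.2f}, {hi:.2f}] deg C; observation "
      f"{'inside: hypothesis not rejected' if inside else 'outside: hypothesis rejected'}")
```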
You also write where we agree. We have made progress. :-)
However, do you also accept that current climate models have little or no skill on multi-decadal time scales?
With Best Regards
Roger Sr.
Hi Roger,
I agree that IPCC must discuss your papers in AR5. Do they?
Your example of a hypothesis is perfectly OK, as an example. It is just different from my example. I still don’t understand how you can call a model itself a hypothesis.
My creed is that you cannot predict the future, but you try all the time, by weighing all information that you have (see http://bit.ly/cCb1n2, under the heading ‘nothing is certain’). I find it useful to explore possible futures with the help of different models. Even if you cannot predict what is going to happen, they help you to explore what could happen.
There is a pitfall when I write “you cannot predict the future, but you try to predict the future all the time”, because the word predict has two different meanings in this sentence. What I really mean is this: “You cannot say with certainty how the future will be. But you do make images of how the future might be. Amazingly, sometimes they come (partially) true, sometimes they are far off.”
As to your last question [“However, do you also accept that current climate models have little or no skill on multi-decadal time scales?”], I don’t know. Models can simulate some aspects of the observed past evolution, but they have been tuned, and even if the hindcasts were correct this does not guarantee that the forecasts will be.
I look forward to the discussion on climatedialogue.org on the usefulness/uselessness of models for the development of climate scenarios.
Here I would propose to keep the focus on the role of humanities in climate science. They might be helpful in explaining why we have different views.
Best regards,
Gerbrand Komen
Hi Gerbrand
Thank you for your further comment.
We are actually quite close in agreement.
In answer to my question
"Do you also accept that current climate models have little or no skill on multi- decadal time scales?”,
You write that
“I don’t know. Models can simulate some aspects of the observed past evolution, but they have been tuned, and even if the hindcasts were correct this does not guarantee that the forecasts will be.”
I agree.
But then you write
“My creed is that you cannot predict the future, but you try all the time, by weighing all information that you have (see http://bit.ly/cCb1n2, under the heading ‘nothing is certain’). I find it useful to explore possible futures with the help of different models. Even if you cannot predict what is going to happen, they help you to explore what could happen.”
In the sense that the models provide informative sensitivity studies [such as we have done for years in looking at the processes associated with land use change], I agree with you.
However, while these, at best, are “possible” [“plausible”] scenarios for the future, they should not be used as the focal point for determining the envelope of future climate for use by the impacts and policy communities.
To do this, in my view, communicates an erroneous impression to those communities of how much we know about the future, including how added CO2 will affect the weather.
Using the concept of a “creed” to move beyond the scientific method in order to provide detailed local and regional forecasts (projections) actually seems to me a faith-based approach that I suggest should not be the main framework for use by the impacts and policy communities.
Finally, you asked
“I agree that IPCC must discuss your papers in AR5. Do they?”
The answer is that I do not know. They certainly should, as well as address the issues that Rob Wilby and I brought up in our paper
Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008. http://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf
which directly relates to your comment that models are not hypotheses.
I would assume you agree that they cannot, by themselves, be used to test hypotheses. Then what role do you see them playing within your construct of a “creed”?
Best Regards
Roger Sr.
Roger, when you write "However, while these, at best, are “possible” [“plausible”] scenarios for the future, they should not be used as the focal point for determining the envelope of future climate for use by the impacts and policy communities." you are leaving your field of competence, and your assertion is false. There may be cases where you are right, but there are other cases where you are not.
Scenarios are, not necessarily in the formal sense of an "envelope of future conditions", most useful for practitioners when thinking about and planning measures, weighing options, and deciding about the time frames of decision processes. Scenarios (projections) are needed as tools to play with ideas, with perspectives full of uncertainties (not only with respect to climate).
Only if stakeholders consider scenarios as predictions, i.e. as most probable evolutions, do they run into trouble. They also run into trouble when they forget that other factors and drivers will change as well.
That is why the difference between predictions and projections, in the IPCC terminology, is so important for our daily interaction with stakeholders.
It took us at our institute a long time to convince "our" stakeholders (mostly from coastal defense) of these issues, limitations and possibilities, but finally, after several years, we succeeded. What we needed for this success was a clear terminology and a consistent discipline in using this terminology, but also the willingness of our fellow oceanographers and meteorologists to look at the problems from the stakeholders' perspective.
I have learned a lot from a book on scenarios based on experiences collected in running big companies without any reference to Earth sciences - maybe you would find it helpful as well:
Schwartz, P., 1991: The Art of the Long View. John Wiley & Sons, 272 pp.
Hi Roger,
Thanks for your reactions. They help me to develop my thinking.
As I now see it, we would need to make a clearer distinction between ‘the scientific method’ and ‘how we communicate our results to policy makers’.
Your creed actually is a mix of both. You write [1] that hypotheses should be tested [this I would see as part of the scientific method]; and [2] that model results should not be given to the policymakers and impacts communities and claimed to be robust when they have no proven skill. This second part is really about the science-policy interface.
From my point of view the second point has been adequately dealt with by Hans von Storch in his comment #14. I suppose you disagree with him, but in my opinion that is your prerogative. (And, who knows, you might change your mind after you have read the book that Hans mentioned.)
I propose that we focus in our discussion on the scientific method.
In my view models are not hypotheses by themselves, but I agree with you that one can formulate hypotheses about the relation between model results and reality. You seem to say that the scientific method consists of testing hypotheses by comparing model results with observations (if they are available; otherwise they could be checked ‘in principle’). If they agree, fine; if they disagree, we have falsified the model. This is what I think you call the scientific method. Correct me if I am wrong.
My view is different. As Hans said, all models (of complex systems) are expected to be wrong in some sense. So you don’t need to falsify them. I find it more constructive to look at how modellers work in practice. Typically, they make simulations and diagnose these by comparing modelled and observed time series (means and statistics), but also by zooming in on particular aspects, such as the radiation balance, the representation of the Brewer-Dobson circulation, the occurrence of ENSOs and QBOs, the variability of the THC, etc. Look at chapter 8 of IPCC/AR4 and http://cmip.llnl.gov/cmip5/publications/allpublications for many more examples. Models can also be used to explore uncertainty (in sensitivity studies), and this can help formulate research priorities.
In my lectures I used to show http://ipcc.ch/publications_and_data/ar4/wg1/en/figure-8-5.html, which compares observed and multi-model mean precipitation. I then left it to the audience to identify similarities and differences and to formulate their own judgment on the performance of the models. My experience is that people reacted differently.
If you study the occurrence of ENSOs you would be happy if you could predict ENSOs or reproduce the observed statistics, but if this fails you might still look at whether ENSOs occur at all in the model, and whether the physical ingredients for ENSOs to occur are present in the model (build-up of a warm pool, wave propagation in the equatorial waveguide, etc.). Models that have these features are more realistic than models that lack them.
Model development – and science – proceed by trying to improve the representation of relevant processes, making full use of available observations, and if needed by organizing new measurements.
Essential in my view is the existence of a continuous scale on which one can express the quality of a model. So models are not right or wrong, but some models are better than others.
Here I suspect that we disagree. You seem to suggest that models are right or wrong. I consider this distinction meaningless, and am in favour of a continuous scale.
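[A minimal sketch of what such a continuous quality scale could look like, with invented fields; RMSE against observations is an assumed choice of metric, and the three "models" are synthetic:]

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "observed" precipitation field and three synthetic model fields.
obs = rng.gamma(2.0, 1.5, size=(10, 10))
models = {name: obs + rng.normal(0.0, spread, size=obs.shape)
          for name, spread in [("model_A", 0.5), ("model_B", 1.0), ("model_C", 2.0)]}

def rmse(field, reference):
    """Root-mean-square error of a model field against the reference field."""
    return float(np.sqrt(np.mean((field - reference) ** 2)))

# A continuous scale: smaller RMSE = better; no model is simply "right" or "wrong".
for name, field in sorted(models.items(), key=lambda kv: rmse(kv[1], obs)):
    print(f"{name}: RMSE = {rmse(field, obs):.2f}")
```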
I look forward to your reaction.
Gerbrand
Hi Gerbrand
In my weblog post
What Are Climate Models? What Do They Do? [http://pielkeclimatesci.wordpress.com/2005/07/page/2/]
I wrote
"There are three types of applications of these models: for process studies, for diagnosis and for forecasting.
Process studies: The application of climate models to improve our understanding of how the system works is a valuable application of these tools. In an essay, I used the term sensitivity study to characterize a process study. In a sensitivity study, a subset of the forcings and/or feedback of the climate system may be perturbed to examine its response. The model of the climate system might be incomplete and not include each of the important feedbacks and forcings.
Diagnosis: The application of climate models, in which observed data is assimilated into the model, to produce an observational analysis that is consistent with our best understanding of the climate system as represented by the manner in which the fundamental concepts and parameterizations are represented. Although not yet applied to climate models, this procedure is used for weather reanalyses (see the NCEP/NCAR 40-Year Reanalysis Project).
Forecasting: The application of climate models to predict the future state of the climate system. Forecasts can be made from a single realization, or from an ensemble of forecasts which are produced by slightly perturbing the initial conditions and/or other aspects of the model. Mike MacCracken, in his very informative response to my Climatic Change essay seeks to differentiate between a prediction and a projection.
With these definitions, the question is where the IPCC and US National Assessment models fit. Since the General Circulation [Climate] Models do not contain all of the important climate forcings and feedbacks (as given in the aforementioned 2005 NRC report), the model results must not be interpreted as forecasts. Since they have been applied to project the decadal-averaged weather conditions in the next 50-100 years and more, they cannot be considered diagnostic models, since we do not yet have the observed data to insert into the models. The term projection needs to be reserved for forecasts, as recommended in Figure 6 in R-225.
Therefore, the IPCC and US National Assessments appropriately should be communicated as process studies in the context that they are sensitivity studies. It is a very convoluted argument to state that a projection is not a prediction. The specification to periods of time in the future (e.g., 2050-2059) and the communication in this format is very misleading to the users of this information. This is a very important distinction which has been missed by impact scientists who study climate impacts using the output from these models and by policymakers."
I suggest that you (and Hans) are mixing up “process modeling” applications with “forecasting modeling”. Both have value, but process model results should not be given to the impacts and policy communities packaged as projections with the implicit implication they are predictions. This, however, is what is being done.
Suggesting that a “creed actually is a mix of both” is not, in my view, an appropriate reason to present the regional and local results from the models to the impact and policy community with the implication that they are actually probable, while at best they are only plausible (possible) future outcomes.
Best Regards
Roger Sr.
Roger,
two comments on your assertion.
"I suggest that you (and Hans) are mixing up “process modeling” applications with “forecasting modeling”. Both have value, but process model results should not be given to the impacts and policy communities packaged as projections with the implicit implication they are predictions. This, however, is what is being done."
First, there are some who "sell" scenarios as predictions; that practice causes unnecessary complications when communicating with stakeholders. However, my impression is that most do not, and the IPCC terminology helps to avoid this misunderstanding.
Second, models also serve as instruments for constructing scenarios, which are neither process studies nor predictions. You do not like that, I know, but I am seeing this in action, given the role of scenarios in an applied context beyond meteorological/oceanographic thinking (the Schwartz book I referred to before).
I do not find the discussion of whether models are hypotheses helpful. The term "model" is used very differently in different quarters of science and society, so that in inter- as well as transdisciplinary contexts we always need a clarification of terminology (cf. the Springer book by Peter Müller and me from 1994, but also Mary Hesse, who in her classic book describes models in physics as a preform of a theory; this concept is inconsistent with what we in climatology use the term for).
Nonsense, the Müller/Storch book is from 2004:
Müller, P., and H. von Storch, 2004: Computer Modelling in Atmospheric and Oceanic Sciences - Building Knowledge. Springer Verlag, Berlin - Heidelberg - New York, 304 pp, ISSN 1437-028X.
sorry.
Hi Roger,
You describe three types of application of climate models, and Hans added a fourth. That’s fine with me; I agree. May I assume that you also agree with my description [in comment 15] of model development, and with the possibility of ranking models according to quality? If so, we seem to agree more or less about what the scientific method is, or in other words, we share the same beliefs about the scientific method.
What remains is the issue of communication with policy makers and the impact community. You write: “(process) model results should not be given to the impacts and policy communities packaged as projections with the implicit implication they are predictions.” Again, I fully agree. When presenting climate scenarios one should use very careful wording.
To end our discussion, please enlighten me. Suppose you are in dialogue with a policy maker, how would you make a distinction between plausible (e.g. 0.5 m sea level rise in the 21st century) and implausible (e.g. 50 m sea level rise in the 21st century) scenarios?
I look forward to your answer.
Best Regards,
Gerbrand
Hi Hans
The majority of the impacts and policy communities, in my experience, interpret the multi-decadal climate scenarios as predictions.
Demetris Koutsoyiannis effectively illustrates that the general interpretation by the impacts and policy communities is that model runs are “predictions” [see his comment on your weblog post – http://klimazwiebel.blogspot.com/2013/07/prediction-or-projection-nomenclature.html]
Anytime the word “will” is used with respect to model runs for decades into the future, as well as when the results are presented as maps for specific decadal periods (e.g. 2050-2059), they are interpreting the model results as predictions (if the specified emission inventory occurred).
You wrote, with respect to the interpretation as predictions, that
“However, my impression is that most do not, and the IPCC terminology helps to avoid this misunderstanding.”
My impression is quite different.
I recommend, if you agree, that we complete a survey to assess how the impact and policy communities interpret the multi-decadal climate forecasts. I would be glad (and prefer) to assist in preparing such a survey.
My other challenge to you is to present an example of the use of your perspective in the medical profession when they are examining whether or not to approve a drug for human application.
When is a medical decision on the use of a drug not based on models in conjunction with real world tests of its efficacy?
Thank you for continuing to engage on what I (and quite a few others) see as a still unresolved issue with the IPCC assessment approach.
Best Regards
Roger Sr.
Hi Roger,
I reviewed our whole dialogue. It was instructive. What I learned is that framing may be more important than values and beliefs.
I also read your valuable comment #20 to Hans von Storch.
I agree with you that there still is a lot of misunderstanding. And I praise Hans for his efforts with the impact community. The word “will” should be forbidden in communications about future climate change. In drafts, written by others, I used to change “will” into “on the basis of current understanding we expect”.
Also, in my opinion, there is further need for semantic clarification regarding what I call “making images of possible futures.”
I am still left with one question, which I asked in comment #19: “Suppose you are in dialogue with a policy maker, how would you make a distinction between plausible (e.g. 0.5 m sea level rise in the 21st century) and implausible (e.g. 50 m sea level rise in the 21st century) scenarios?”
I hope you are willing to reply, because this might clarify where you stand.
Best Regards,
Gerbrand
Hi Gerbrand
Thank you again for continuing the discussion. We still disagree on one very important aspect. You suggest replacing “will” with “on the basis of current understanding we expect”. This is not much of a distinction. :-)
Indeed, using Hans’s definition that a “prediction” is a probable outcome and a “projection” is a possible outcome, your use of “we expect” fits into the prediction category. Your terminology would perpetuate the misinterpretation of the skill of the multi-decadal climate runs by the impacts and policy communities.
You also asked again with respect to how I would answer
“Suppose you are in dialogue with a policy maker, how would you make a distinction between plausible (e.g. 0.5 m sea level rise in the 21st century) and implausible (e.g. 50 m sea level rise in the 21st century) scenarios?”
I will expand on the answer I provided in my earlier reply. I would tell them that a rise of 0.5 meters during the coming decades is a plausible threat, but that we cannot assign a probability to its occurrence. I would recommend measures to reduce the threat posed by such an increase. A rise of 50 m is much less likely, but its possibility is not zero. I would, however, not recommend specific action against such a threat.
Finally, I still would like to see your (and Hans’s) answer as to what type of modeling application falls outside of the diagnostic, process and predictive applications. You said Hans has introduced a fourth type. Please define it in a couple of sentences.
My other (related) question [which I listed in my reply to Hans] is
"My other challenge to you is to present an example of the use of your perspective in the medical profession when they are examining whether or not to approve a drug for human application."
When is a medical decision on the use of a drug not based on models in conjunction with real world tests of its efficacy?
I would like your answer on this. You could use this medical example to illustrate how your and Hans’s fourth type of modeling application applies.
Best Regards
Roger
Hi Roger,
I try to avoid using the words prediction and projection, because, as I indicated, these words are in my opinion still in need of semantic clarification.
I think there is a huge difference between “will” and “on the basis of current knowledge we expect”. Implicit in the second formulation is uncertainty, because current knowledge is imperfect and is likely to change. By using the word “expect” I try to indicate that I am making a statement about what I expect could happen, based on my subjective judgment.
My judgement would be, typically, that we do not know what will happen but that we can formulate a number of different plausible scenarios, to which no objective likelihood can be attached.
I would make a distinction between plausible and implausible scenarios by weighing all evidence, including evidence from what you call model process studies.
On the basis of what evidence would you decide what is plausible and what not?
Unlike you, I would never make any recommendations as to what action should or should not be taken.
Models were useful when we constructed the KNMI climate change scenarios. For example, by looking at CMIP model output we realized that changes in wind regime were conceivable and highly relevant, since the dominant wind direction has a major effect on the weather in the Netherlands. So we constructed scenarios that took this possibility into account.
Drug approval and the approval of climate policy seem quite different to me. Anyway, as a scientist, I’m not in the business of approving action, so I prefer to refrain from responding on this issue.
Best Regards,
Gerbrand
Hi Gerbrand
In answer to your response: in my reading of the phrase “expect to happen”, I do not see a distinction from the use of “will”. In terms of recommending action, I accept my son’s perspective that he presents in his book
The Honest Broker - http://sciencepolicy.colorado.edu/publications/special/honest_broker/
As scientists, we can choose to be advocates for a particular policy view (and I see this as what the IPCC has done) or present the spectrum of available policy options based on the diversity of scientifically valid viewpoints [i.e. the honest broker approach]. We need to iterate with policymakers to do this, rather than just present them with our scientific conclusions and then let them take it from there.
On your example
“Models were useful when we constructed the KNMI climate change scenarios. For example, by looking at CMIP model output we realized that changes in wind regime were conceivable and highly relevant, since the dominant wind direction has a major effect on the weather in the Netherlands. So we constructed scenarios that took this possibility into account.”
This clearly is an example of a process (sensitivity) model study being used to create scenarios. I have no problem with doing this, but it needs to be emphasized when presented to the impacts and policy communities that there is no demonstrated predictive skill in the quantitative results that are being given to them. The CMIP models have shown no skill at predicting, in hindcast, changes in wind regimes over the Netherlands.
Saying that these results “will occur” or are to “be expected” in specific decadal periods in the future, etc., is misleading the impacts and policy communities.
On your reply
“Drug approval and the approval of climate policy seem quite different to me. Anyway, as a scientist, I’m not in the business of approving action, so I prefer to refrain from responding on this issue.”
they are quite similar in terms of the science/policy interface. You are also already involved in the policy arena, through your involvement in the IPCC process.
Finally, you still have not answered how your and Hans’s view of modeling application includes a fourth type.
Best Regards
Roger Sr.
Hi Roger,
You write
”Finally, you still have not answered how your and Hans’s view of modeling application includes a fourth type.”
Maybe Hans can answer. My point was that you can use models for the development of scenarios. I hope this is clear now.
It’s correct that I have been active in the science-policy interface, as part of my job. I even had a chance to reflect on this in a discussion with your son, whose book I highly recommend.
Let me once more express my gratitude for your patience. I enjoyed the discussion and I learned a lot.
Perhaps we will meet in future in some other thread.
Thanks again,
Gerbrand
Sure, Roger - a fourth application is the construction of scenarios. This is an application mostly for planners and decision makers, rarely for the meteorological sciences. Look at the Schwartz book.
If you do not accept this, you must demonstrate that this tool is unsuitable for this purpose.
Hi Hans
Your fourth type of model application is using the process model application type and just renaming the results as scenarios. There is really no fourth type.
I have already documented the inadequacies of this approach in my guest post and comments at
http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/
as well as on your weblog post
http://klimazwiebel.blogspot.com/2013/07/prediction-or-projection-nomenclature.html
What are being provided as "scenarios" to the impacts and policy communities are just model results with no demonstrated skill with respect to the changes in climate metrics on multi-decadal time scales that those communities are requesting.
I have shown that the use of scenarios as you define them, with any claim that they have skill, is misleading those communities.
Best Regards
Roger Sr.
Hi Gerbrand
I enjoyed our exchange of views also! I look forward to discussing again in the future both on the internet and in person.
With My Best Regards
Roger