Saturday, March 6, 2010

Climate models and the laws of physics


'Climate science is solid because it is based on the laws of physics', we sometimes hear, but perhaps this sentence subliminally conveys a level of certainty that is debatable. Even if the laws of physics are perfectly known, calculations based on these laws may be only approximate. This is the case for climate models. A simple comparison of the mean temperature simulated by climate models with real data shows that the story is not that simple.


Perhaps some of you will be a bit surprised to read that very few problems in physics can be solved exactly. By 'exactly' physicists mean that a closed formula, one that can be written down on paper, has been found that predicts or describes the behavior of a physical system without further approximation. Physics students are given the very few interesting problems that are exactly solvable and spend quite a lot of time analysing them, because they form the template for approximate solutions to more complex problems. For instance, the classical harmonic oscillator is a simple problem that we all learn at school, and it can be applied to calculate approximately the frequency of oscillation of a pendulum, which is a much more complex problem. Similarly, the movement of two point bodies that attract each other gravitationally is an easy problem - solved by Newton a few centuries ago - and can be used to describe approximately the movement of three point bodies. I wrote approximately, because an exact solution of this apparently simple problem is not known. It is not even known whether the solar system is stable or unstable. The list of exactly solvable problems is pretty short. To the two cases just mentioned we may add two others from quantum physics: the harmonic oscillator and two charged particles under electrostatic attraction (e.g. the hydrogen atom).
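To make the word 'approximately' concrete with the pendulum example: the exact equation of motion has no simple closed solution, but for small swings it reduces to the harmonic oscillator, whose solution we do know. In standard notation (g gravity, L pendulum length, theta the swing angle):

    \ddot{\theta} = -\frac{g}{L}\,\sin\theta \;\approx\; -\frac{g}{L}\,\theta
    \quad\Longrightarrow\quad T \approx 2\pi\sqrt{L/g}

The closed formula on the right is only as good as the small-angle assumption behind it; for large swings it slowly fails.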

To describe the dynamics of the climate is a much more complex problem than all of these previous examples. It can only be described approximately, with the help of very complex numerical models. It can be argued that, nevertheless, the level of approximation is sufficient for our purposes, since the models are based on the same 'laws of physics'. A quick look at the results presented in the last IPCC report indicates that the matter is really not that simple. Let us have a look at the mean annual temperature for the present climate as simulated by the IPCC models: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-8-2.html

  Upper panel: observed mean near-surface annual temperature (contours) and the difference between observed temperature and the mean of all IPCC models (color shading); lower panel: typical error of an individual model.

This figure, reproduced from the IPCC report, shows the typical error of an individual IPCC model (the difference between the simulated and observed temperature at each grid cell). We see differences of the order of 1 to 5 degrees, and the areas where the difference is 3 degrees or larger are really not negligible. For illustration, 3 degrees is the difference between the annual mean temperature in Madrid and Casablanca, or between Goteborg and Paris. Some of these differences arise because climate models cannot represent the topography of the surface well, due to their coarse resolution. This is probably the case for Greenland and the Himalayas. Other factors are the uncertainties in the observations (Antarctica). But nevertheless, I do not think that this is a reassuring picture. We also have to take into account that climate modellers have certainly optimized the free parameters in their models - among others the uncertain present solar output, but also other internal parameters - to try to achieve the best fit to observations. And yet this fit is not that good. The situation for other variables such as precipitation, wind, etc., is no better.
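For readers who want to redo this kind of comparison themselves, here is a minimal sketch of the arithmetic behind the shaded field and the 'typical error' (Python; the input arrays are placeholders, the real model output is publicly available from the multi-model archives):

    import numpy as np

    def bias_statistics(t_model, t_obs, lats):
        """Grid-cell bias, area-weighted mean bias and RMSE between a simulated
        and an observed annual-mean near-surface temperature field (deg C).
        t_model, t_obs: 2-D arrays (lat x lon); lats: 1-D array of latitudes."""
        bias = t_model - t_obs                            # the field shaded in the figure
        w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(bias)  # area ~ cos(latitude)
        mean_bias = np.average(bias, weights=w)           # systematic offset
        rmse = np.sqrt(np.average(bias**2, weights=w))    # typical error magnitude
        return bias, mean_bias, rmse

The distinction between the two numbers matters for the discussion further down: the global mean bias can be small while the error magnitude at individual grid cells, which is essentially what the lower panel shows, can still be 2 or 3 degrees.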

This is not surprising news, and nobody has tried to hide these discrepancies. They can be found in the IPCC report without difficulty. They do not necessarily mean that climate models are bad at simulating climate changes from the present state. It could well be that they are skillful in representing the reactions of the climate to perturbations of the external driving factors, such as CO2. But if they are based on the very same well-known 'laws of physics', why is it not possible to simulate the present Earth climate with the accuracy that we require? Some of these errors are as large as the projected temperature changes in the future. Are we missing something fundamental?

Quote from the IPCC AR4 Report: 'The extent to which these systematic model errors affect a model’s response to external perturbations is unknown, but may be significant'



68 comments:

  1. One thing that intuitively bothers me is the use of a dynamic model, IOW a weather model...run forward for a huge amount of time. We know that it does not accurately predict weather more than 10 days out or so. What makes us think that, despite this problem, it will predict climate?

    This is not to say that the answer coming out is "wrong". But I really wonder how much the whiz-bang of the model is really giving us an independent right answer, and how much we could do just as well with some linear extrapolations or the like. I mean the GCM is NOT calibrated over the new CO2 concentrations, and it's not accurate in the details (at either spatial or temporal resolution). But we are supposed to assume that it is independently giving evidence for longer-term global average effects? Why?

    I can't exactly label what makes me uneasy about this, Eduardo. But imagine that this were semiconductor physics. Would this kind of thing be trusted to give useful independent insights? Or would we want some different sort of model, or would really phenomenological trends be the real explaining power, with all the modeling being a fancy affair that does not really add new certainty?

    ReplyDelete
  2. Well here is a much too simple answer. But the statement that it must be impossible to predict the climate because it is so hard to predict the weather has been made again and again on this blog.

    Look at the example of rolling a die. Predicting the next number to come up is indeed impossible. But it is really easy to predict what the distribution of numbers will be after hundreds of tries.

    Climate is exactly this: average weather.
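    Just to make the dice point concrete, a toy calculation (nothing to do with climate models):

        import random

        random.seed(1)
        rolls = [random.randint(1, 6) for _ in range(600)]

        print("next roll:", random.randint(1, 6))   # a single roll: unpredictable

        counts = {face: rolls.count(face) for face in range(1, 7)}
        print(counts)                    # each face turns up roughly 100 times
        print(sum(rolls) / len(rolls))   # close to the expected mean of 3.5

    The individual outcome is beyond prediction; the statistics are not.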

    ReplyDelete
  3. Falk,
    the answer is indeed too simple. It would provide a kind of argument only in the case of a stable climate.

    In the case of climate change, the problems of predicting climate and weather are not very different.
    Both are statistics of the actual state of the atmosphere/climate system (remember that we are dealing with the Reynolds equations, i.e. the averaged Navier-Stokes equations, spelled out below).
    Both predictions are likely to be influenced in amplitude and phase, e.g. by the initial conditions, specifically by the initial state of the ocean.
    The recent papers on decadal prediction by Smith et al., Keenlyside et al. and Pohlmann et al. illustrate this well.
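    To spell out the Reynolds averaging just mentioned: every field is split into a mean and a fluctuation,

        u = \bar{u} + u', \qquad \overline{u'} = 0,

    and both weather and climate prediction are after statistics of the mean state; averaging the Navier-Stokes equations leaves extra terms (the Reynolds stresses \overline{u'_i u'_j}) that cannot be computed from the resolved flow and have to be modelled.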

    Whether the GCMs are able to describe a changing climate has not yet been sufficiently addressed,
    and I would doubt it, looking at the errors in ocean temperature as reported e.g. in the IPCC report. The ocean has a
    characteristic time scale of about 1000 years; projections 100 years ahead are very unlikely to
    be independent of the initial conditions.

    I personally think it was a big mistake to use the models for this kind of assessment without a thorough and critical identification of the limits and skills of the models, including the specific set-ups used here.

    Not to be misunderstood here: I think complex 3-d models are great and powerful tools, but overstretching their area of validity is a great danger, particularly because of the degree of realism of these models.


    Corinna

    ReplyDelete
  4. Great article, Eduardo. We need more of these. A similar analysis has started in 'Models On - and Off - the Catwalk'.

    - Corinna, please can you provide the references for the papers you mentioned.

    - Falk Schützenmeister, this would assume that each element of the climate system has something approaching a Gaussian distribution about the mean (and that we know what "the mean" actually is).
    Apart from measuring people's heights and tossing coins, most results in the real world don't follow this.

    Past climate shows us that the system is more than a number of independent processes averaging out.

    ReplyDelete
  5. Well, I did not evaluate any climate model but I tried to discard an obviously incorrect argument.

    Even if climate models have a number of problems, they cannot be evaluated by weather prediction which is a whole different business.

    Of course there are also big similarities between the climate and weather models. But this does not contradict my example. I talked also about the same dice.

    And everyone will probably agree that it is much easier to predict that the summer will be warm than whether it will rain in Germany in 10 days.

    I also think that it is beyond doubt that climate models can predict climate change under varying physical conditions.

    The first practical application of GCMs was nuclear winter research. The change of the climate system by a nuclear war was assumed to be so severe that even those primitive models would tell us something about the consequences. I do not know how today's climate models react to a nuclear war input. Does someone know research about this?

    The question is whether climate models are sensitive ENOUGH to predict man-made climate change. I believe so but I am not an expert and I would not make a statement about that.

    By the way, most climate models are not stochastic models (in contrast to many weather prediction models); they are rather deterministic. Those pure models do not depend on initial conditions.

    However, parametrization becomes more and more important. There is of course a lot of controversy about that. But in this case models depend more and more on initial conditions. And get more messy.

    ReplyDelete
  6. Falk / 5 - just out of curiosity: Why do you "think that it is beyond doubt that climate models can predict climate change under varying physical conditions"?
    Also this assertion is relatively broad: "By the way, most climate models are not stochastic models (in contrast to many weather prediction models); they are rather deterministic. Those pure models do not depend on initial conditions." Rather deterministic? - what do you mean by that?

    ReplyDelete
  7. #Steve:- Corinna, please can you provide the references for the papers you mentioned.

    Of course:
    http://www.sciencemag.org/cgi/content/abstract/317/5839/796
    http://www.nature.com/nature/journal/v453/n7191/abs/nature06921.html
    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F2009JCLI2535.1

    ReplyDelete
  8. @Hans

    ad 1) I should not have used the term "climate change" since it also refers to the political issue. I should have written climate variability or something like that. However, climate models are fairly sensitive to extreme events (e.g. the Pinatubo eruption). Whether that is enough to make the case for man-made climate change is another question. I am not a climate researcher and too bad a statistician to argue for that.

    However, if climate models were not at the center of a political conflict, they would be very, very impressive to almost everyone who loves science.

    ad 2) There are two principal forms of models: stochastic and deterministic. The first assumes that statistical relationships between data allow the prediction of a system (which indeed needs to be stable) without knowing how it actually works. A simple form is a regression model. A meteorological example is the precipitation chance given in US weather forecasts, which is (as far as I know) entirely based on historical data.

    However, most climate models work by the mathematical description of actual flow processes (energy). It is the attempt to model the mechanics and the thermodynamics of the climate or increasingly the earth system.

    Of course there is also statistics involved (e.g. the swampy ocean and the foggy air in FAR). However it is considered as something that has to be overcome by more research and faster computers.

    Of course, Hans, you know all this and my terminology might be a little off, I am a sociologist studying the history of climate research. I assume that you asked the questions for clarification for other readers. Thanks.

    ReplyDelete
  9. Hans:

    "rather deterministic" in this sentence means "instead deterministic", not that they are "a little" or "somewhat" deterministic deterministic. It's easy to misunderstand.

    ReplyDelete
    One too many "deterministic" above, I think :-)...

    ReplyDelete
  11. Eduardo
    'Some of these errors are as large as the projected temperature changes in the future. Are we missing something fundamental?'

    I think it is important not to lose sight of this question in Eduardo's post. It raises the problem of attribution (of the "A" in AGW), since climate model predictions are used to demonstrate that the changes expected under AGW theory are very close to observed changes. If this exercise becomes questionable because of too large deviations of observations from model predictions, then we need to reconsider the role of models for climate policy.

    This problem is in its logic identical to the question of a MWP: there are those who think climate policy depends on a definitive answer to the question raised.

    And here, as in the MWP controversy, alarmists and sceptics think that the science will tell us what is legitimate policy. This is the great illusion and the great danger, for science and policy.

    ReplyDelete
  12. @ 1
    TCO,

    I think you're addressing the question of the validation of climate models. And indeed it is a very difficult question. Think in terms of cosmology rather than in terms of semiconductor physics. No experiments can be conducted; we have just one observed time evolution over a short period of time, and we try to construct models to predict the future evolution. There is indeed a danger of model designs that overfit the observations for the wrong reasons. That is the reason for the interest in climate reconstructions of the past centuries: to have another, independent data set against which climate models can be tested.

    Yes, I can understand that you feel uneasy. But as it is unwise to praise climate models as perfect tools, I think it is also unwise to dismiss them. It is reasonable to think that higher GHG concentrations go with higher temperatures, and models are so far the only tool to try to estimate those temperature changes. Just because climate models are imperfect and have limitations does not mean the problem goes away.

    I do not think that climate scientists are less intelligent than materials physicists. Without the possibility of experiments, the problem is just hard.

    Mere extrapolation of trends? CO2 concentrations are already outside the natural range of the last, say, one million years. If one thinks CO2 is an important factor, simple extrapolation is not a consistent approach.

    ReplyDelete
  13. Falk,
    the problems of GCMs will very likely not be solved by higher resolution and faster computers in the near future. They are much more fundamental, and many
    relevant processes are still not incorporated in the GCMs for various reasons.

    The climate models are only partly based on so-called first principles (which you called deterministic, I guess) but to a very large extent based on empirical process parameterisations. For many relevant processes the governing equations are either not known or far too complex to be solved in 3-d climate models.
    These are not a few minor points which add 1-2% uncertainty here and there; these parameterisations have the potential to strongly control the modelled energy flow and to limit (and/or introduce) feedbacks (a modellers' saying is 'shit in, shit out'). Parameterisations are in particular involved when the different climate sub-systems communicate with each other and exchange energy.
    They are for sure partly responsible for the model errors in reproducing the average present-day conditions, and it is very likely that this becomes more relevant when we focus on changes in the climate system that are small compared to the model errors presented in the latest IPCC report.

    Even for the part of the models which is based on first principles, i.e. the fluid dynamics equations, we have to employ parameterisations to make the equations solvable: the equations are non-linear and not solvable for complex orography. We use parameterisations of sub-grid-scale turbulence here (by the way, the turbulence parameterisations used in GCMs are not very advanced).
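    A standard example of such a sub-grid closure is the simple eddy-diffusivity (K-theory) ansatz for the turbulent vertical fluxes,

        \overline{w'\theta'} \approx -K_h\,\frac{\partial\bar{\theta}}{\partial z}, \qquad
        \overline{u'w'} \approx -K_m\,\frac{\partial\bar{u}}{\partial z},

    where the exchange coefficients K_h and K_m are not derived from first principles but prescribed or tuned (real GCMs use more elaborate schemes, but the principle is the same).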

    Moreover, some of the parameterisations, e.g. the radiative properties of aerosols or the vertical turbulence in the ocean, are used to tune the models and are not implemented to be as close as possible to process observations.

    Finally, (again) GCMs are very dependent on the initial conditions, particularly on the ocean initial conditions.
    This is a key problem, since the ocean initial condition is not known; we simply do not have sufficient observations.
    I have not seen a detailed analysis for the IPCC models, but in my early years in modelling I saw results of a spin-up integration for an ocean-only model;
    from that and my general knowledge of oceanography I would expect the ocean initial condition to influence the GCMs for a minimum of about 500-1000 years.

    ReplyDelete
  14. #Reiner: I think it is important not to lose sight of this question in Eduardo's post. It raises the problem of attribution (of the "A" in AGW) since climate model predictions are used to demonstrate that the assumptions made by AGW theory are very close to observed changes. If this exercise becomes questionable because of too large deviations of observations from model predictions then we need to reconsider the role of models for climate policy.

    This is indeed the point.

    Modelling the climate system is a very complex problem. We started to do so using the same models which had been used for
    weather prediction and have been learning by doing how the models have to be further developed (including the ocean, cryosphere, bio-geosphere, etc.); by the way, the IPCC has
    contributed a lot to the advancement of climate modelling.

    However, we are still far away, only now developing Earth system carbon cycle models which can really address the
    AGW problem, and we are still learning a lot from these exercises.

    My position here is somewhat different from Eduardo's (if I understand your position right, Eduardo):
    I feel GCMs are powerful tools and should be used in science. A scientific paper
    has room to discuss the set-up and the limitations of the model (which the IPCC report actually did not) and should
    address the question of whether a model is applicable to a certain study.
    The published research is then of course part of the IPCC assessment.
    However, using the models as assessment tools, as done in the last IPCC assessment report, should be avoided. There was not enough room for a critical discussion of the
    limitations, and the limitations of the modelling exercise were not successfully passed on to the general public (e.g. in the executive summary).
    From the AR4 it is even impossible to find out the basic set-up information.
    The way the model assessments were performed for the last IPCC report (and likely for the next), namely as a coordinated effort of the
    entire community, prevents the possible conclusion that the models are not appropriate for the task of assessing the consequences of climate change.

    ReplyDelete
  15. As a layman it seems very plausible to me that we can predict a warming even if we can't simulate the short-term climate fluctuations.

    Even if the models are very complex, a simple radiation balance should be enough to predict a continuous warming for the next decades.

    But, also as a layman, I wonder if we really understand the natural climate variations. All climate models are based on climate feedbacks. If it gets 1° warmer, the feedbacks produce 2° more for free.

    But let's imagine that the natural climate variations stop the warming process. There will be no more feedbacks either.

    Wouldn't we first have to understand the really big climate shifts of the past, before "we" pretend to understand the climate?

    If a doubling in CO2 ALWAYS results in an enormous warming, this will also happen to us.

    We wouldn't need precise decadal climate models if we knew that. Why spend all this money if the solution is so simple and can be found in the past?

    ReplyDelete
  16. @ 15,

    yes, models predict global warming. The question is how much warming, and that depends on the feedbacks, which causes each model to produce a different answer, so that in the end there is a range of answers, not just one.

    'But lets imagine, that the natural climate variations stop the warming process. There will be no more feedbacks either'

    this is not entirely correct. Even if natural fluctuations stop the warming (and the feedbacks, as you said), the CO2 perturbation is still there, so that when those natural fluctuations disappear, the warming resumes.

    And yes, understanding natural fluctuations is very important. It is also important to be aware of what is known and what isn't, and of what models can reasonably simulate and what they can't. Black or white impressions are not fair.

    ReplyDelete
  17. @ 14
    Corinna,

    I think that the question of whether models can be used for assessments or not should be decided by the policymakers themselves, of course with the technical help of scientists. They should be more active in asking scientists about the possibilities and limitations of models, and they know better than us what they want to know.

    For instance, the UN Framework Convention would set up a list of questions, and the IPCC (this time free of government intervention) would try to answer those questions.

    ReplyDelete
  18. Eduardo,

    During the last years I have worked alongside climate science and had to deal with the consequences of the climate model assessments in the field of applied marine ecosystem research (related to decision making).
    The research funding shifted almost completely to climate impact assessment,
    and I had to deal with EU research projects in which we were asked to predict (!!) the regional consequences of climate change in about 100 years.

    There are still very unrealistic expectations around about the possibilities of climate impact assessment, and the understanding of the limitations of
    GCMs, the foundation of our regional assessments, is not well developed.
    During almost every discussion about climate modelling I have attended in this context, I have met the attitude
    that it is not our (the climate impact community's) task to review the climate models' suitability for impact assessment; that was considered the IPCC's task.
    This seems to be a very general understanding of the responsibility of climate science, and of the IPCC in particular. However, this expectation is not fulfilled by the IPCC (climate modellers):
    tools are provided without accepting responsibility for assessing the general suitability of these tools for the task.

    I don't think that we can leave the responsibility for the assessment method to the politicians and decision makers; it is our (your) task as (climate) scientists. If we engage ourselves in assessment, we must start a dialogue to identify expectations, raise questions and speak about obvious misconceptions, before proposing a certain tool for assessment. This is the responsibility of scientists who get engaged in assessment.


    You say that nobody tried to hide the uncertainties of the climate models and referred to the IPCC report. This is certainly true; information about limitations is presented,
    e.g. in the validation chapter. However, this is done less clearly and conclusively than possible, and we have to admit that it has largely been overlooked by all:
    related research fields, politicians and decision makers, and the general public.

    ReplyDelete
  19. Ed: I completely agree that it is reasonable to expect an increase from CO2, both because of the observed trend as well as physical intuition. And I even think the "heat in the pipeline" in the ocean as well as the "water vapor amplification" are reasonable physical intuitions.

    My visceral concern is just with the "form" of the modeling. I kinda wonder about a dynamic weather model...that is an approach that seems so different from the situation of different long-term steady states. It just "feels" like somehow they are using the wrong tool. I really can't express it properly, since I am not that acquainted with modeling in either climate or physics (that said...I've learned to listen to these feelings of mine...I just strain for a good explanation). Like when a freshman chemistry student uses kinetic equations rather than equilibrium ones, or uses density functional theory spaghetti band modeling for something that is more molecular. I know that they can get the models to "behave"...but given the lack of direct validation, really the lack of indirect validation, and the "handles" available for training...I really wonder if the model is giving more insight than we would have from some very simple trend extrapolation (perhaps with a term for "in the pipeline").

    I don't really know cosmology. Just am too unqualified. Some examples of complex modeling exist though (structural effects of explosions, nuclear explosion modeling). Maybe nuclear reactor core operation is an interesting example of successful modeling. I know that in that field, different models are useful for different time scales, and there is lots of validation. I just feel better about it.

    You might take a look at the book Blind Lake by Wilson. There is an example of modeling of telescope pictures, where they keep modeling when the input stops (IOW the program dreams). I'm not a silly denialist trying to say climate models are exactly the same thing...but just that it is an interesting concept to read about and may spur some thinking for you.

    ReplyDelete
  20. Never read a book on deterministic chaos?


    There are mathematical theories that tell you how well a model can predict something. The issue has been known since Henri Poincaré, who studied many-body problems in the 1890s and realized that even in a fully deterministic environment, the exact state of a dynamical system can only be predicted if you know its initial state infinitely (!) well. Since measurements of an initial state are only accurate to a certain level, slightly different initial states must lead to large deviations in the computed end state. Another view on this issue is to say that the computation of a dynamical system is done in finite steps. Even if you entered the initial state infinitely correctly, the discretization of the prediction method will lead to large errors after a while.
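    A minimal numerical illustration of this sensitive dependence, using the classic Lorenz-63 system rather than anything climate-specific (crude fixed-step Euler integration with standard parameter values; accurate enough for the point being made):

        def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """One forward-Euler step of the Lorenz-63 equations."""
            x, y, z = state
            return (x + dt * sigma * (y - x),
                    y + dt * (x * (rho - z) - y),
                    z + dt * (x * y - beta * z))

        a = (1.0, 1.0, 1.0)          # reference initial state
        b = (1.0, 1.0, 1.000001)     # the same state, perturbed in the 6th decimal

        for step in range(1, 6001):
            a, b = lorenz_step(a), lorenz_step(b)
            if step % 1000 == 0:
                dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
                print(step * 0.005, round(dist, 4))   # the tiny difference keeps growing

    After a model time of about 20-30 units the two trajectories are as far apart as two randomly chosen states on the attractor.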

    Non-linear systems add the complication that they destroy the information about the initial state after some time. Alternatively, there is an issue in non-linear systems with tipping points. If there are meta-stable states ("attractors") in a dynamical system, the set of initial conditions from which the dynamical system reaches a given attractor usually has a complex topology, and neighboring initial states may reach different attractors. If you are lucky, a dynamical system produces only a few attractors that fall on higher-dimensional topological manifolds with no more than, say, ten dimensions.

    The worst models in terms of predictability are open dynamical systems, where there are non-constant external drivers. (I believe that GCMs fall into this category, however.)

    How do computer modellers usually cope with non-linear, deterministic, open systems? First, they internalize the external drivers and incorporate them into the model - more often than not by setting them constant or letting them grow linearly. Both are wrong, of course, but computer power is limited. As for the identification of attractors, simulations of dynamical systems are typically run several hundred thousand times and the structure of the space of end states is analyzed with statistical methods.

    Not being a climate scientist, I can only speculate on how climate modellers cope with the non-predictability aspects of their research. Certainly, Navier-Stokes equations describe non-linear, open, complex dynamics.

    What I would love to see in the IPCC reports is, for a single climate model, a few dozen predicted average climates over a period of, say, the years 2080 to 2100, where each simulation is started with slightly different initial and/or boundary conditions. If all runs of the climate model produce the same results, this would be a hint that the models are stable enough to actually be relied on. So far, I am not at all convinced that the climate modelers are even aware of the mathematical issues of dynamical models regarding predictability.

    Could you please tell me how climate models are run?

    ReplyDelete
  21. For Corinna and Eduardo:

    Thanks for your fascinating comments.

    I think especially as Eduardo said: "Black and white impressions are not fair"

    I'm a skeptic - not of the physics but of our ability to understand & model climate to the level required for attribution and forecasting.

    But in the polarized world of debate that has characterized climate it is easy (inevitable?) to choose "a black or white" position on climate models.

    It is more challenging to find a position which is not black or white and which is fairer to the (difficult to interpret) evidence.

    Many people outside of the "AGW community" really want to understand this subject better, and at the moment naturally fall into the "models are not reliable, therefore we can know nothing" camp. I believe this is not the right conclusion.

    But those best placed to enlighten us (the outsiders) about the value of models are mostly more engaged in polemic than discussion (or silent), and repeating mantras which make people not already in the AGW camp more rather than less skeptical.

    So I am very happy to see "The Climate Onion" (is this the right translation of Die Klimazwiebel?) open this subject up.

    Hopefully there will be much more. It is badly needed.

    (Corinna, thanks also for the references)

    ReplyDelete
  22. Dear Corinna,

    perhaps my comment was a bit confusing. I was proposing that policymakers should decide to what extent models are useful for them, with technical help from scientists. An illustration of the process would be:
    - Policymaker: can you tell me if summer precipitation in the Baltic region in 2050 will be higher than today and with which certainty?
    Scientist: no, I can't.
    Policymaker: your model is not useful for me.

    or

    Scientist: yes, it may be with a probability of 70%
    Policymaker: Ok, thanks. It is uncertain but useful information

    I think this process is more practical than expounding on the physics of climate in a report for policymakers.


    About (not) hiding uncertainties in the IPCC: I think the IPCC should be policy-relevant, but based on the solidity of its science and not because of intense lobbying. If the policymakers didn't find the uncertainties made explicit in the IPCC, it is their fault, not the fault of the scientists. Perhaps the present climate impacts discussion is too strongly scientist-driven. The policymakers are the ones responsible for decisions; they already know how important (or not) climate impacts could be, so please let them ask the questions they feel they need answered, and let the scientists respond, as any technical advisor would do.

    ReplyDelete
  23. Dear Eduardo,

    you said "Scientist: yes, it may be with a probability of 70%". This sentence raises a few questions:
    1. What exactly is the nature of a probability assessment in climate models?
    2. Is there a notion of ergodicity in climate models and has it been studied?
    3. How often are climate models run based on similar but not exactly equal input in order to assess the stability of the outcome?

    ReplyDelete
  24. @20,23

    Dear Bjorn,
    There are different types of climate simulations, but the ones you are referring to are usually performed in the following way. This is a simplified account, but I hope it contains the basic steps.


    The model representing the climate is started from a situation of rest, i.e. zero velocities, with the present external drivers (solar irradiance, concentrations of greenhouse gases). The simulation is continued until a stationary state is reached. This means that the mean annual temperature just fluctuates around a certain value, and there are no long-term trends any more. A simulated period in this stationary state represents a sample from which to extract the statistics of the present climate: mean, variability, cross-covariances, etc.

    The model states at several of these time steps after the stationary state has been reached are saved. These are the initial conditions for a transient climate change simulation: the concentration of CO2 is then elevated according to a predefined scenario, and the model is integrated for 100 years.

    The simulation is repeated starting from the different initial conditions (this is the point you addressed in your comment). For the IPCC report the transient simulations were initiated from 3 or 5 different initial conditions. The data from these simulations are open to everyone here: http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php
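    To make the sequence of steps concrete, here is the same protocol compressed into a zero-dimensional toy energy-balance model (a few lines of Python; the numbers are purely illustrative and this is of course not a GCM):

        import random

        LAMBDA = 1.3   # W m-2 K-1, toy climate feedback parameter
        C_HEAT = 8.0   # W yr m-2 K-1, toy effective heat capacity

        def run_one_year(temp, forcing, noise=0.3):
            """Zero-dimensional energy balance: C dT/dt = F - lambda*T + 'weather' noise."""
            return temp + (forcing - LAMBDA * temp + random.gauss(0.0, noise)) / C_HEAT

        # 1) control run: fixed forcing until only fluctuations around a stationary state remain
        random.seed(0)
        temp, control = 0.0, []
        for _ in range(1000):
            temp = run_one_year(temp, forcing=0.0)
            control.append(temp)

        # 2) save several states of the stationary part as initial conditions
        initial_conditions = control[500::100]   # five states, 100 years apart

        # 3) transient scenario runs: forcing ramps up to about 4 W m-2 over 100 years
        ensemble = []
        for temp in initial_conditions:
            run = []
            for year in range(100):
                temp = run_one_year(temp, forcing=4.0 * year / 100.0)
                run.append(temp)
            ensemble.append(run)

        print([round(run[-1], 2) for run in ensemble])   # every member ends with a similar warming

    The spread between the members is the initial-condition variability I mentioned; in this toy, as in the GCM ensembles, it is small compared with the forced signal.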

    In attribution studies, where the observed climate change trends are compared with what is expected from model simulations, the variability in the simulated target patterns (obtained from the simulations with different initial conditions) is also taken into account. This variability is rather small, i.e. each model produces a fairly consistent climate change signal with some small variations. The reason is that the external forcing by year 2100 caused by CO2 is extremely large, and this without any feedbacks. Globally averaged it is about 4 W/m2, to be compared with possible variations of solar forcing in the past 1000 years, which are of the order of 1 W/m2 at most.

    If you are interested I can produce the climate signal pattern from 5 simulations of a 'good' model.

    To my knowledge, there are very few studies about the climate phase space (ergodicity and other dynamical properties). Consider that these simulations are very costly. Only in recent years have models been integrated for longer than 200 years or so; millennial simulations are very rare.

    For some particular aspects, possible bifurcations of the climate have indeed been considered. This is mostly the case for the thermohaline circulation in the North Atlantic Ocean:

    http://en.wikipedia.org/wiki/Thermohaline

    but things like strange attractors and the like are really not central in climate studies.

    Concerning the predictability that you and Corinna were referring to, there are different opinions. Most would say that, for climate change applications, it is not that relevant, as the climate change signal is determined by the external forcing, and this signal is somewhat model-dependent but for each model it is unequivocal. Corinna is of the opinion that initial conditions are important. This is also what Pielke Sr. thinks. I am not really qualified here, but my guess is that for decadal predictions they are clearly relevant; I am not convinced that for climate change with a time horizon of 100 years they are really that important. In my opinion at these timescales the important things happen in the atmosphere, but my view may be partial. I think we lack the computing power to answer that question now.

    ReplyDelete
  25. @23,

    the concept of probability in a prediction is indeed slippery, and I do not think that an untrained policymaker would grasp it fully.

    I was mostly referring to a Bayesian degree of belief. With the present model set-up we cannot calculate probabilities, because we really cannot span the full range of model and parameter uncertainty.

    In the end policy decisions are based on rough estimates of probability and political feasibility, so perhaps it is not that necessary to base policy decisions on probability calculations to the 5th decimal place.

    For example, the question to answer would be whether society should take into account the costs of raising levees over the next decades or not.

    Another example is public pensions. The future costs can be calculated with much narrower uncertainty, but nevertheless societies do what is feasible and many public pension schemes are unfunded.

    ReplyDelete
  26. @ 21

    Steve,

    It turns out to be quite difficult. Some years ago I regularly posted comments at ClimateAudit, but finally I was put off by many commenters. While some may have a genuinely open stance to engage in a debate, for many it is just venting predetermined opinions.
    This is unfortunately widespread in many blogs, so all the more reason to try to keep this one civilized.

    ReplyDelete
  27. @ 19
    TCO, thanks for the book suggestion!

    You were wondering if something more phenomenological would work. Perhaps something more in the direction of Landau theory instead of the computationally intensive renormalization group?

    Something in the direction of this paper? http://coast.gkss.de/staff/zorita/paper.pdf

    ReplyDelete
  28. #24, Eduardo

    Just a comment on your contribution:

    "Corinna is of the opinion that initial conditions are important. This is also what Pielke Sr. thinks. I am not really qualified here, but my guess is that for decadal predictions they are clearly relevant, but I am not convinced that for climate change with a time horizon of 100 years they are really that important. In my opinion at this timescales the important things happen in the atmosphere, but my view may be partial. I think we lack the computing power to answer that question now."

    It has been shown that the initialisation is important for the decadal predictions (hence the decadal time scale; see the references in post 7). For those simulations the SST or the upper-ocean mixed layer was initialised.
    I feel it is very likely that changes in the initialisation of the deeper ocean will impact the longer prediction time scales, because the heat stored in the ocean is huge, due to its large mass and heat capacity, and has a large potential to change the atmospheric temperature. The typical timescales of the ocean are very long, O(100-1000 years), but sooner or later any deep-water anomaly will come into contact with the atmosphere and affect the warming there.

    ReplyDelete
  29. #25, Eduardo

    I have not really understood where the probability numbers in the IPCC report are coming from, or similar numbers like those given in your example.
    I have the feeling they are purely subjective, a gut feeling so to say. And I doubt that, if we were both asked, we would come up with similar numbers differing only in the 5th decimal place; I guess our probability assessments might differ by an order of magnitude.
    Is this an unwarranted assessment, just resulting from the fact that I have not understood the rationale behind the probability estimates?

    ReplyDelete
  30. Eduardo: your comments are the treats of this blog, including when they are buried (not headposts). I remember you engaging at CA. I agree about the hoi polloi and further think the proprietor is amiss himself.

    For instance, he spent huge amounts of ink years ago blathering about "bad apples" and challenged you with them. And you simply said "I don't know what a 'bad apple' means mathematically so can't even engage". Here he was getting a real working scientist (you) engaging...and raising an interesting issue (the need to define the term)...and he had ZERO INTEREST in doing so. So, while he was happy to pen many posts about bad apples, he was not interested in REALLY learning how different data interact with different methods.

    Burger (of Burger and Cubasch, which was a beautiful full factorial that made it much easier to understand the impact of method choices than McI's wandering scribblings and non-controlled experiments (changing two factors at once and evaluating the impact as if from one)) had a similar experience. He is completely (more than!) worthy of engagement, and Steve would blow off his requests for method details to pen more screeds for the hoi polloi.

    And Steve is really the best of them. Watts is dishonest AND stupid.

    I remember spending a several-hundred-post thread where I actually brought in Jolliffe (!) as an authority to show JeffId that his comments about negative thermometers were not true. It was painful how hard it was to get any of them to admit a math point. They really are so deep into "internet debate" and "sides" that they don't even really think like curious mathematicians or physicists.

    ReplyDelete
  31. Dear Eduardo,

    Thank you for this refreshing post and the following excellent discussion. As a layperson, I cannot say how grateful I am to the moderators of this site to finally find a blog site where the quality of discussion is always kept so high.

    I would like to ask a question about something you said in #24.

    You wrote, "In attribution studies, where the observed climate change trends are compared with what is expected from model simulations, the variability in the simulated target patterns (obtained from the simulations with different initial conditions is also taken into account. This variability is rather small, i.e. each model produces a fairly consistent climate change signal with some small variations. The reason is that the external forcing by year 2100 caused by CO2 is extremely large, and this without any feed-backs."

    It is the first time I have ever heard it said that the expected warming at 2100 -- *even without feedbacks* -- would be large. Usually I hear it said by skeptics such as Lindzen, Spencer and Motl, but I think by others too, that we would expect about a 1.2 C increase in temperature from a doubling of CO2, and I think they assume that the doubling of CO2 would occur around 2100. It was also my understanding that the "no feedback case" was fairly simple and uncontroversial from the point of view of the physics.

    Have I misunderstood something?

    Many thanks in advance,
    Alex Harvey
    Sydney, Australia

    ReplyDelete
  32. @Alex Harvey: I think you mix up forcing and temperature here. The more the CO2 concentration rises, the stronger its absolute influence on climate becomes, even if each unit of CO2 has less influence than the one before.

    So if we go "drill, baby, drill" and find enough oil and coal to burn, 560 ppm leads to a 1.2 C increase and 1120 ppm leads to a 2.4 C increase. Without feedbacks.

    ReplyDelete
  33. @ 31
    Dear Alex,

    I think you are confusing forcing and temperature response. It can be a bit difficult for a layperson to keep these concepts apart, as even in many blogs you read things that are not completely correct.

    The feedback is not the amount of 'additional temperature' change. It is the change in forcing (in watts per square meter) produced by a temperature change. So both are even measured in different units: temperature change is measured in degrees and the feedback is measured in watts per square meter per degree of temperature change.

    What I was referring to in my post is the forcing. The forcing due to a doubling of CO2 is about 4 watts per square meter. Climate models translate this forcing to a temperature response taking into account the original forcing and some feedbacks. Björn was asking what would be the differences in the simulated temperature change when the models are started from different initial conditions. My answer was that this variability is small, i.e. the influence of the initial conditions is smaller than the influence of the original forcing, because this original forcing is really strong.

    I didn't say that the no-feedback warming is strong. Indeed it is simpler to calculate than the with-feedbacks case: roughly 1 degree. I did say, though, that the forcing caused by a doubling of CO2 is larger than the variations in natural forcings over the past millennium.
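    For completeness, the back-of-the-envelope version of those two numbers (standard textbook values, not taken from any particular model; C/C_0 is the CO2 concentration ratio and sigma the Stefan-Boltzmann constant): the forcing of a CO2 doubling is roughly

        \Delta F \approx 5.35\,\ln(C/C_0)\ \mathrm{W\,m^{-2}} \approx 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}},

    and balancing it with the extra blackbody emission at the effective emission temperature T_e of about 255 K gives the no-feedback response

        \Delta T \approx \frac{\Delta F}{4\sigma T_e^{3}} \approx \frac{3.7}{3.8}\ \mathrm{K} \approx 1\ \mathrm{K},

    which is the 'roughly 1 degree' above. Everything beyond that comes from the feedbacks, and that is where the models disagree.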

    ReplyDelete
  34. @ 28
    Dear Corinna,

    I agree that for decadal prediction the initial conditions are important. For longer timescales, say 100 years, I would be more skeptical, although I am not aware of any systematic analysis, so this is more my hunch than a substantiated answer. I would believe that the conditions in the deep ocean would show up in longer-term predictions. If the timescales of the deep ocean are several centuries to one millennium, then these conditions would be important for predictions at those time scales.

    ReplyDelete
  35. @ 29

    Corinna, I do not know either, and I have heard accounts that the uncertainty ranges in the projections of the global mean temperature are a kind of guess-estimate. I am not defending the IPCC procedures here; I was envisaging a procedure for the interaction between policy and science. Your opinion seems to be that science has as yet no certainty to offer to policymakers, and in many cases this is true, especially at regional scales and for some variables like precipitation. For others I think it is indeed possible to offer qualified projections for the next few decades, for instance sea level. Particularly for sea level there seems to be a real demand for advice. Scientists should however also have the courage to say that many things cannot be predicted.

    ReplyDelete
  36. Dear Eduardo,

    you raised indeed an important question about how valid the results of simulations with numerical models can be considered, if these numerical algorithms
    + are run only once (and not several times in order to create some statistical information on the solution space),
    + use only a fraction of the natural factors that have an impact on Earth's climate,
    + have not been checked systematically for stability.

    These problems are not uncommon in other fields of "numerical" physics, however, and when I was still working in science, they were handled in several ways:
    + The integration algorithms were run with different time intervals and the results were compared (a small sketch of this first check follows below the list)
    + Noise was added (usually in the form of a temperature) in order to ensure ergodic behaviour
    + A system's time average was compared to the results of ten simulations of 1/10 the length
    + Different algorithms that simulate the same set of equations were compared
    + It was studied carefully how long the relaxation time is (infinite at T=0 and around phase transitions) before using the "equilibrium" state for measurements
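    As announced in the first point, a minimal sketch of the step-halving check on a problem with a known answer (simple exponential decay, nothing climate-related):

        import math

        def integrate(decay_rate, t_end, dt):
            """Forward Euler for dy/dt = -decay_rate * y with y(0) = 1."""
            y, t = 1.0, 0.0
            while t < t_end - 1e-12:
                y += dt * (-decay_rate * y)
                t += dt
            return y

        exact = math.exp(-2.0)                      # analytic solution at t = 2
        for dt in (0.1, 0.05, 0.025):
            approx = integrate(1.0, 2.0, dt)
            print(dt, abs(approx - exact))          # the error roughly halves with the step

    If the answer stops changing as the step shrinks, the discretisation is under control for that quantity; whether equivalent bookkeeping is documented for the published climate runs is part of what I am asking.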

    Corinna is right when she says that the error is certainly not in the fifth digit after the decimal point. Actually, I would quite gratefully accept your offer to produce the climate signal pattern from five simulations of a 'good' model - you choose what that is.

    What makes me most cautious when seeing results from climate simulations is that they are still not able to reproduce longer meteorological cycles such as the PDO. (Please correct me if I am wrong!) This comes as no surprise. You told me once that climate models do not model the surface of the Earth in sufficient detail, but use the average elevation over large areas of at least 30 x 30 km.

    {In this context one question. Some people say that CO2 should be a cooling gas at least in the stratosphere. Do you actually see in the simulations that the lower layers of the atmosphere get warmer under CO2 doubling and the higher layers get colder?}

    But back to the thread on models and their interpretation. I don't share your view that it is irrelevant to reproduce the measured climate when asking for the response to, say, a CO2 doubling. As long as natural variability and feedback cycles are not understood to a degree that simulations can reproduce past climate with reasonable quality, scientists should apply more diligence in interpreting simulation results.

    ReplyDelete
  37. Does the Irreducibility of Uncertainty provide evidence of the certainty of Irreducibility, with all its random consequences?

    Maksimovich

    ReplyDelete
  38. I will try to answer the questions by Bjorn in #20 and part of #36. (I am not a modeler or theoretician myself, though I participate in a modeling project. So my explanation is not authoritative.)

    The climate system seems to be "chaotic"(*). We often take Lorenz's model as a typical example of chaos. It is a forced-dissipative(*) system rather than a conservative(*) system. In the long run, the state of the system in its "phase space"(*) is likely to be near its "attractor"(*). And the attractor is bounded(*), though it is indefinitely complex.
    [The words marked with (*) are technical terms of the theory of dynamical systems, a part of theoretical physics or applied mathematics.]

    So, when we look at the system in a sufficiently long time scale, we expect that the state of the system fluctuates within a certain domain (around the attractor).

    (cf. We cannot predict the precise locations of runners of a 10000 meter race in a 400 meter track. But we are sure that they are in the stadium.)

    In a non-linear system, in principle, phenomena at different time scales have mutual interaction. But we expect, based on experience about turbulence, that the short-time-scale phenomena affect the long-time-scale phenomena only via their simple statistics such as averages and covariances.

    Weather is likely to be chaotic, but it can be regarded as noise when we think about seasonal (3-month) averages. El Nino cycles are likely to be chaotic, but they can be regarded as noise when we think about slow changes of decadal averages.

    The difficult issue we have is multi-decadal variability, parts of which are called PDO (Pacific Decadal Oscillation), AMO (Atlantic Multi-decadal Oscillation), etc. Climate projections do exhibit something like PDO, but its phase appears random. New studies suggest that there is some predictability of PDO phase when we have good initial condition about the state of the ocean. But we cannot predict the phase of PDO beyond several years. The situation looks like that of weather beyond two weeks.

    We expect that PDO can be regarded as noise when we look at the trend of global warming. The expectation is not guaranteed, however.

    ReplyDelete
  39. I haven't read all the comments, so maybe this was already said.

    1) The figure shows the average of all models. Some are better, some are worse. If you took only the best three, the picture would improve a lot.

    2) Typical climate change metrics such as "climate sensitivity" are not linearly related to the error metric shown in this figure. That is, a "bad" model might have the same or a smaller climate sensitivity than one of the good models. In other words, improving this temperature error does not change the climate predictions.

    3) The figure shows the root mean square error. Obviously, even though large areas of the ocean are in the 2-3° error range in the figure, this does not mean that the models are 3° too warm or too cold there.

    4) For the question of how relevant a certain type of error is for the climate predictions, one should compare with simpler models. Radiative-convective models with the correct (since forced) sea surface temperatures show similar climate sensitivities. So for everything which is global temperature change, and probably sea level change as well, these model problems are not really important. For the rain in Spain, however, it might matter crucially what the temperature gradient in the subtropical Atlantic is.

    ReplyDelete
  40. @ 39
    Hola Georg,

    1) the upper panel shows the average of all models. One can pick the best few and the errors will of course be smaller. However, another question arises: with a sufficiently large number of bad independent models, I could always select the few best that happen to look better than the others by chance. So there is a danger of overfitting.

    2) I agree. The question that I wanted to raise is however a bit different, namely why models differ if they are based on the same 'laws of physics'. Sometimes this argument is put forward to 'demonstrate' that climate models must be correct, because they are based on the laws of physics. In the introductory paragraph I wanted to show that there is a long way between the laws of physics and successful predictions. There are clearly better and worse models; why is this?

    3) Right, I would rather have written that the lower panel shows the typical size of the error magnitude, not of the error itself. Nevertheless, I find it worrying that even for the open ocean an individual model can be 2 or 3 degrees off (colder or warmer) in the annual mean temperature.

    4) Let us forget prediction for the moment, and focus on an objective model characteristic, for instance equilibrium climate sensitivity. Why do the models cover a wide range? I am not asking if the sensitivity is right or wrong, but if the models are based on the same principles, shouldn't they yield closer values? If they are not based on the same principles, and some of them are incomplete, how can we be sure that the 'best' model is not also missing something important?

    ReplyDelete
  41. Hola Edu

    basically we agree in each point. Some additional remarks.

    My impression was that some of the IPCC models are really, really bad but were included for political reasons (we want a Russian model, a Chinese model and ...). The bias in the quality (I think, but I am not sure) is not linear.

    2) If one looks at Figure b for each individual model, one recognizes immediately the respective problems, such as the trade winds, the exact position of up- and downwelling areas, etc. The French model, for example, shows huge differences (at least as large as in Figure b) when only the horizontal resolution is changed.
    So it seems to me much harder to identify "the one and only reason" why the results differ between the different models (though the physics is basically the same) than to identify the reason why their climate change predictions (or climate sensitivity) differ.
    Apparently for the latter point people seem to agree on the shortwave behaviour of low-level clouds. But even in this case it is not so clear whether the "problem" of the low-level clouds ultimately has to do with ocean mixing or gravity wave drag or whatever. That's the dilemma when analysing complex systems.

    To your point 4: My feeling is (but I might be wrong) that the top ten climate models have a spread of about 1-1.5°C, and that is at least not so extremely far away from the uncertainties in the observations (just as a short remark, there are non-trivial issues when using satellite temperatures which have to do with the daily subsurface temperature cycle in the ocean and skin-layer cooling). In summary, I am not sure if the remaining deviations between the better models and the obs are something real and important (possible) or just a mixture of remaining uncertainties in the observations plus some art of tuning the model parameters.

    ReplyDelete
  42. Ed: I'm comfortable with the concept that small scale (in time and space) variability will not have an effect on the larger changes. But what bothers me is USING the model that is designed around small scale stuff and just running it forward to try to understand the big stuff.

    For instance, random Brownian motion will give some variability of dust motes on a surface or the like. But we know that the gas laws, or how an automobile engine works, are irrelevant in terms of those small motions (they average out). But when we go to understand the macro scale, we don't just take some godawful direct collision model and speed it forward on steroids, with band-aids to take care of resolution issues. Instead we change to whole new models, using statistical mechanics (already taking the averaging into account...not brute-force, molecule-by-molecule modeling). Or we can even use phenomenologically based gas laws that were around even BEFORE stat mech and Boltzmann.

    I'm not really an expert, but I think the same thing occurs in nuclear reactor dynamics. Very different forms of modeling are needed when the neutron flux is very small and stray variations can be a serious issue. At higher fluxes, whole different modeling approaches are used, NOT just taking the "source range" flux models and brute-forcing them to higher levels.

    All this said... it does not mean that I have any idea of what modeling approach to use for climate-scale changes over a century, or even whether there IS a convenient different approach to use. But the lack of an alternative does NOT make what we are using any more certain. It just makes it humanly understandable how people and funding gravitate to it. But it still might be shit.

  43. Georg:
    >My impression was that some of the IPCC models are really, really bad but are included for political reasons (we want a Russian model, a Chinese model, and ...). The bias in the quality (I think, but I am not sure) is not linear.
    and..
    >My feeling is (but I might be wrong) that the top ten climate models have a spread of about 1-1.5°C

    Easy to check: from my (just visual) analysis I found that, not surprisingly, the errors are smallest for the model mean and larger for almost all individual models (http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-8-3-sm-1d.jpg). I could not identify top ten models with an error range of about 1-1.5°C; maybe you can? The smallest error margins, those of the model mean, run from -5 to +5 °C; leaving out the hotspots, they are still between -4 and +2 °C, with most areas clearly too cold for the present-day climate.

    To me, the surface temperature error is worrying, taking into account that surface temperature is the tuning parameter of climate models. However, I find the errors in the ocean temperature even more worrying, since in terms of heat content they provide a much larger source of error than the few degrees in surface temperature.
    But what worries me most are the identified errors in the Earth's radiation balance. They are really large (Figure 8.4 in IPCC AR4 chapter 8: RMS of about 10-35 W/m2 in shortwave and 5-30 W/m2 in longwave radiation), compared to the relatively small radiative CO2 forcing of about 4 W/m2 we are talking about (and again, all individual models are well above the model-mean error). I believe this really indicates that the models have substantial limitations, and I would not claim that they are at all suitable to be used for the purpose. Reading the IPCC report text about the radiation budget errors, I can only conclude that the authors, for some reason, were not very critical when assessing the models (a tendency I also find elsewhere in the report).

    One is of course free to formulate the minimum requirements which a model (or model type) has to meet before it can be considered suitable for assessing the impact of increased CO2 levels on the Earth's surface temperature.

    To start with, I would formulate the following minimum criterion: the models need to show that their error margins in modelling the Earth's radiation budget (and its variability) are smaller than the radiative forcing from the CO2. (Because that is what it is about: the Earth's radiation budget.)
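
    For scale: the standard simplified expression of Myhre et al. (1998), not a number taken from the IPCC chapter itself, gives

        \Delta F \approx 5.35 \,\ln(C/C_0)\ \mathrm{W\,m^{-2}}, \qquad \Delta F_{2\times\mathrm{CO_2}} \approx 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}},

    which is indeed small compared to the 10-35 W/m2 RMS errors quoted above.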

    Of course this is subjective, and others might prefer other criteria. In any case, the IPCC criteria are not really clear (at least not to me), and it is far from transparent on what criteria the decision to use the models for assessments is based (and, by the way, the model setups and runs used to produce new research for the IPCC do not fulfil the IPCC's own criteria on published work).

    Just to clarify: I believe the climate models are impressive and the modellers have done an extremely good job; climate modelling is a very difficult task. I further think that the models, despite their large error margins, are an important research tool and can resolve, or contribute to resolving, many different research questions. However, I don't think that the models are currently qualified to be used as assessment tools for the particular anthropogenic CO2 problem. Letting the public rely on unpublished model runs (and setups) is, I believe, a big mistake in the IPCC concept and the biggest fault of Working Group 1.

  44. @Eduardo / Georg #39+40

    Ad 1)
    I am not sure whether you both overlooked what is, at least in my opinion, the most crucial point. "Good" and "bad" GCMs were used with equal weights by the IPCC in the AR4 to predict climate sensitivity, altogether 23 models of varying quality, from useful to a dreadful waste of time. Yet the UNWEIGHTED AVERAGE climate prediction was used by the IPCC in its statement of what constitutes current knowledge in climate science. When I first noticed this, I got quite upset and thought the procedure utterly unscientific.
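
    Just to illustrate what I mean, with made-up numbers (neither the actual AR4 sensitivities nor any official skill scores), a weighting by back-testing skill would change the headline number:

        import numpy as np

        # Hypothetical equilibrium climate sensitivities (K) of an ensemble of models,
        # and hypothetical skill scores from back-testing against observations.
        ecs   = np.array([2.1, 2.7, 3.0, 3.2, 3.4, 3.8, 4.4])
        skill = np.array([0.2, 0.9, 1.0, 0.8, 0.9, 0.6, 0.3])

        print("unweighted mean:    ", ecs.mean())
        print("skill-weighted mean:", np.average(ecs, weights=skill))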

    Ad 2)
    Counter-example: a finite-element simulation of a car crash tells you that a crash at 35 km/h does not produce any substantial damage to a certain car type. In the crash test of the corresponding prototype, however, the car is completely destroyed at 35 km/h. Would you like to drive that car at 80 km/h?

    The analogy is that a model should not be trusted to represent valid physics if it is not able to reproduce past climate.

    Ad 3)
    Georg, you are fully right. However, the individual models' backtesting results are displayed in the appendix of chapter 8 (?) of the AR4, so you could check where exactly a model produced too much warming and where too much cooling. I somewhat share Ed's concerns about the backtesting errors over the open ocean.

    Ad 4)
    Georg: Are you sure that simpler models could produce more reliable results than complex ones? If CO2 absorbs heat and radiation, there should be a warming effect in the troposphere, a cooling effect in the stratosphere and an increase in (vertical) convection. I doubt whether simple models could do better at modeling vertical convection than sophisticated ones.

  45. @Kooiti Masuda #38

    You mentioned that weather is chaotic, and I couldn't agree more. The question, however, was whether climate models reproduce chaotic behaviour, not whether the weather is chaotic. As Anna pointed out recently, it is not the full Navier-Stokes equations but the simplified Reynolds equations that are used in climate models.

    Let us dwell on the type of simplification: the Navier-Stokes equations describe a fluid that is subject to turbulence on every spatial scale. As a consequence, there is no general analytical solution and no exact numerical solution to the full Navier-Stokes equations, only approximations.
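
    For reference, the incompressible Navier-Stokes equations (momentum and continuity) read

        \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\tfrac{1}{\rho}\nabla p + \nu\,\nabla^2\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0.

    (As far as I understand, atmospheric models actually solve a further-approximated, hydrostatic 'primitive equation' form of these, but this is the starting point under discussion.)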

    Maybe one of the professional climate modelers could comment on what simplifications are actually made. (Unfortunately, I won't be able to do this on my own tonight.)

    My guess is that the simplifications have to do with cutting off small-scale turbulence. Why I believe so: this would greatly reduce the computational complexity.

    If I am right, it means that exactly the necessary "noise" is thrown away that would otherwise ensure ergodicity, in other words, ensure that "the short-time-scale phenomena affect the long-time-scale phenomena only via their simple statistics such as averages and covariances". In any case, I still think that we are in deep trouble when using models that have not been checked thoroughly for their mathematical behaviour.

  46. @corinna

    Your point on the errors over the ocean is quite disturbing but convincing.

  47. @Corinna
    "I could not identfy top ten models with an error range about 1-1.5 C, may be you can?"
    I was focusing on ocean results.
    Yes we can.
    ECHAM5 looks nice, some others as well. The errors are quite interesting and often linked to subtropical low level clouds.

    Your criterion for quality control doesn't make sense to me. The problems with the clouds mentioned above quickly produce errors of a couple of W/m2, though I can't see in what respect exactly this problem affects the ensemble of feedback processes that finally control the climate sensitivity.

    If I remember right, just using different broadband radiation schemes already adds up to a couple of W/m2 (i.e. just different parameterisations of band width and shape and the like). Only line-by-line models give nearly perfect agreement.

    In general I don't think that one single metric (like SSTs) is a good criterion for quality control. It is more the connection between different processes and mechanisms.

    @Bjoern
    No, the results of simple models are not more reliable. What I wanted to say was that a simple model can obviously easily be tuned to give the right mean temperature. But its basic physical behaviour (e.g. its climate sensitivity) does not depend on having the mean temperature right.
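
    As a caricature (a zero-dimensional energy-balance model, nothing to do with any GCM): you can tune a single parameter to hit the observed mean temperature exactly, and that tells you next to nothing about the feedbacks that set the real sensitivity.

        # Zero-dimensional energy balance: sigma*eps*T^4 = S0*(1-alpha)/4
        SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W m-2 K-4]
        S0    = 1361.0    # solar constant [W m-2]
        ALPHA = 0.30      # planetary albedo

        # "Tune" the effective emissivity so the model reproduces 288 K exactly
        eps = S0 * (1 - ALPHA) / (4 * SIGMA * 288.0**4)

        # Linearized response to a 3.7 W m-2 forcing (the no-feedback Planck response)
        dT = 3.7 / (4 * SIGMA * eps * 288.0**3)
        print(eps, dT)    # dT is about 1.1 K, no matter how good the tuning looks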

  48. @Bjorn (#45)

    I should first clarify the distinction between actual climate models and the analogical thinking I offered using the concept of chaos.

    Actual climate models are based on the Navier-Stokes equations. Roughly speaking, fluid motions smaller than the resolvable scale (an example for the atmospheric part: horizontal grid interval 100 km, time step 15 minutes) are treated as something like diffusion, while fluid motions at scales larger than this are computed explicitly.

    The representation of weather in climate models is "chaotic" in the sense that the growth of small differences in the initial conditions is compatible with chaos theory. But it is not certain whether concepts of chaos theory such as the "attractor" are really applicable here. The situation is more complex than the favorite targets of theorists of complex systems.

    I said that we think only averages and covariances of short-time-scale phenomena (weather, sometimes ENSO or PDO) matter to long-time-scale phenomena. This is not an explanation of how climate models work; it is an explanation of how we interpret the output of climate models.

    When we have two runs of the same model with slightly different initial conditions, the phases of the weather phenomena will likely be different. We often do "ensemble experiments" containing many such runs. By taking averages over many runs, we expect that the effect of the peculiarities of each realization is reduced, and that those statistical properties (not explicitly defined) of weather phenomena that are relevant to climate are better represented. This is not a rigorous theory but a working hypothesis.
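
    A very schematic illustration of this working hypothesis (synthetic numbers, not output of any model):

        import numpy as np
        rng = np.random.default_rng(0)

        # Five hypothetical runs: the same forced trend, but different "weather"
        # because of slightly different initial conditions.
        years  = np.arange(100)
        forced = 0.02 * years                                      # common forced signal [K]
        runs   = forced + rng.normal(0.0, 0.15, (5, years.size))   # realization-dependent noise

        ens_mean   = runs.mean(axis=0)   # peculiarities of each realization partly cancel
        ens_spread = runs.std(axis=0)    # a simple measure of the internal variability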

  49. Dear experts,

    In reading about climate models on the web, more than once I have come across the names of J. Scott Armstrong and K.C. Green. To my layman's eyes they make a lot of sense, especially when talking about do's and don'ts in model building.
    However, in your messages on this excellent blog no mention is made of the two of them. Can any of you inform me about the value of their ideas and views, of which a lot can be found on www.theclimatebet.com ? Thanks!

  50. @ 42
    TCO,

    climate models historically stem from short-term weather prediction models, which is quite reasonable, I think. These models are complicated, so why start from scratch when there was already a tool that could be augmented for longer-term climate projections? Later, additional submodels were incorporated: ocean, sea ice, vegetation, carbon cycle, etc.

    There is also a slightly different type of climate model, the so-called Intermediate Complexity Models, which try to capture the basic elements of climate dynamics while bypassing a detailed micro-representation of all processes. But I would say that in the end they are simplified versions of General Circulation Models, parametrizing many of the small-scale processes. Parametrizing means here finding a simple closure scheme for the processes that are not resolved explicitly, as in turbulence models.

    Your comments seem thoughtful. Perhaps we need a new model paradigm for the climate system. One problem is that, to my knowledge, there is no holistic theory of climate based on basic global conservation or maximization laws, like, say, entropy production or similar. Defining such a model while bypassing the description at the micro level would be quite difficult.

    Also, in the end we would again face the problem of validation. How can a climate model be tested and validated against observations? This applies to all types of climate models.

    - To reproduce the present mean climate is not enough, as the model may have been overfitted to reproduce the observations.
    - To compare with short-term chaotic weather trajectories does not give any information, since weather is not predictable after a few days, so models and observations can perfectly well give different answers even if the models are correct.
    - To compare with longer (centennial) evolutions, which would be driven by external forcings rather than being chaotic, is not possible either, because we do not have the observations.

    Ideas are welcome..

  51. @ 43,47

    Corinna and Georg,

    I would tend to agree with Corinna here that the simulation of the energy balance is really important. The behavior of clouds is a strong determinant of the climate sensitivity. If and how cloud cover changes with temperature is one of the big unknowns, and it would be reassuring if climate models could reproduce the mean cloud cover and its variability.

  52. @44
    Dear Björn,

    1) Yes, so far all models are weighted equally to estimate the range of possible temperature responses in the standard IPCC figures. Actually, the calculation of these ranges is not very formal: the range simulated by all models is multiplied by a factor. This is meant to represent the limited model sampling, i.e. just about 20 models instead of a full model sample covering all sources of uncertainty. It is known that this approach is not optimal, so hopefully the next IPCC report will improve on it.

  53. @ 45
    Dear Björn,

    perhaps you may want to have a look at this paper on ocean modelling.
    http://coast.gkss.de/staff/zorita/holloway_SurvGeophys.pdf

    To my knowledge, the dynamical properties of climate models have seldom been studied from this theoretical point of view.

  54. #49, Amateur

    I guess you refer to their paper

    GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS*
    Kesten C. Green and J. Scott Armstrong

    which can be downloaded (as well as another paper, which I have not read)
    http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=26&Itemid=129/WarmAudit31.pdf

    I came across this paper about a year ago and also found it very convincing. It seems that they are experts in forecasting research and have developed standard principles and procedures to follow for expert forecasting. After reading their paper I had the strong feeling that it is very necessary for the persons responsible for organizing the IPCC process to study their work further and to get in contact with experts in forecasting research, in order to organize the assessment more professionally: we natural scientists tend to be a bit naive about social science, management, policy and decision making, and we generally strongly overestimate our ability to be objective.
    However, this is not really my field and I did not feel qualified to introduce their work here. Maybe we should discuss this in a separate thread?

  55. @Kooiti Masuda (#48) & Bjorn (#45)

    To add: the climate models are based on the Reynolds equations. These are developed from the Navier-Stokes equations by a perturbation approach, splitting every variable into a mean and a turbulent disturbance. The resulting equations are equations for the mean (typically interpreted as a mean related to the grid resolution, e.g. averaged over 200 km). These equations include quadratic perturbation terms which do not vanish after averaging (e.g. u'v'). Strictly speaking, these are additional unknowns for which we do not have equations (we can derive some, but they include new unknowns, higher-order perturbations, ...). This is the turbulence closure problem, which is solved by parameterisation of the turbulent perturbations: we relate them, e.g., to the shear in the mean field. Many different methods are in use to parameterise turbulence, and the models are actually quite sensitive to the choice of the parameterisation.
    To find out what is actually done in terms of turbulence parameterisation in the IPCC models is quite difficult: there is no documentation of the actual setups in the IPCC report, and the runs are unpublished (some might have been published afterwards). However, to my understanding the turbulence parameterisations used in the IPCC models are actually extremely simple, but maybe others who were directly involved in the IPCC model runs can answer this better.

    The smallest scale which can be resolved by a numerical model is two times the grid spacing; hence a model with 100 km resolution must parameterise all phenomena smaller than 200 km.
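
    Schematically (the textbook first-order closure, not the actual scheme of any particular IPCC model): each variable is split as u = \bar{u} + u', averaging the momentum equation leaves terms like \overline{u'w'}, and the simplest parameterisation relates them to the mean shear via an eddy viscosity K:

        \frac{\partial \bar{u}}{\partial t} + \bar{\mathbf{u}}\cdot\nabla\bar{u} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x} - \frac{\partial \overline{u'w'}}{\partial z} + \ldots, \qquad \overline{u'w'} \approx -K\,\frac{\partial \bar{u}}{\partial z}.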

  56. Hola Edu
    I haven't checked the paper, but the differences between different broadband radiation schemes ALONE are larger than the 2xCO2 radiative forcing. Since this was Corinna's criterion, we would be done with climate modelling, independent of any problems with cloud modelling.

    "The behavior of clouds is very determinant of the climate sensitivity. "

    It is the most important single uncertainty, but it does not determine the climate sensitivity by itself. Again, several W/m2 of differences in the energy balance are due to the problems with subtropical low-level clouds alone. Though I certainly wish that the models improve there, as I wish they improve in sea-ice thickness, in the representation of Ekman pumping, or in the precipitation of the Indian monsoon, I cannot see why these clouds should now be the key to climate modelling.

  57. Dear Kooiti,

    I liked your sentence "The situation is more complex than the favorite targets of theorists of complex systems." It is certainly true that my concerns are of a more theoretical nature, and it is always a challenge to get the theory right in a practical case. I'll try to read the paper Eduardo suggested over the weekend.

    Summary observations on climate models; please correct me if I am wrong:
    + Climate models use Reynolds equations that are non-linear up to second order, i.e. include quadratic terms.
    + Time steps of integration vary between 15 min. (Kooiti) and 6 h (AR4), but are not stable against changes in integration methods and parameters (Eduardo, Corinna).
    + Climate models cannot represent orographic details in oceans and mountains.
    + Climate models so far cannot reproduce long-term periodic climatic patterns such as the PDO. (I believe this is related to the issue above!)
    + Climate model prediction capability is not essential in order to compute climate sensitivity (Georg).
    + Climate models so far have not displayed non-ergodic behaviour. (Some mathematicians have argued that while a system with ~20 observables may show chaotic behaviour, a system with 200,000 observables might organize itself into predictable, large-scale patterns. Since I have not heard of that idea in the past ten years, I believe it was a dead end.)
    + Relaxation times of climate models are chosen to be of the order of a few years or decades, hence shorter than some of the periodic climate patterns.

    I would like a new thread to be started for a more thorough discussion of Georg's claim in #56, but this is not the right place.

  58. Bjorn (#57)

    > + Climate models use Reynolds equations that are non-linear up to second order, i.e. include quadratic terms.

    It is not that simple. In atmospheric models we must somehow incorporate the effect of clouds, and various attempts are made. Sub-grid-scale phenomena that we regard as turbulence are expressed as Reynolds terms in the equations of large-scale motion. There are closures of various orders, and second order is one of them.

    >+ Time steps of integration vary between 15 min. (Kooiti) and 6 h (AR4), but are not stable against changes in integration methods and parameters (Eduardo, Corinna).

    The value of the time step is determined by the requirement of computational stability. It differs case by case. I mentioned 15 min. as an example compatible with a 100 km horizontal grid interval: wind speeds of 100 m/s are possible, so the time step must be less than 1000 seconds to maintain computational stability.
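
    (This is the Courant-Friedrichs-Lewy stability criterion for advection:

        \Delta t \le \frac{\Delta x}{|u|_{\max}} = \frac{100{,}000\ \mathrm{m}}{100\ \mathrm{m\,s^{-1}}} = 1000\ \mathrm{s} \approx 17\ \mathrm{min},

    so a 15-minute time step fits within the limit.)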

    We do not usually output numerical data at every time step, but rather monthly or daily statistics. For the coordinated experiments related to the IPCC AR4, participants agreed to write output files at common time intervals, the shortest of which was 6 hours; for the AR5-related experiments, it is 3 hours.

    >+ Climate models cannot represent orographic details in oceans and mountains.

    They can represent coarse-grained orography larger than the grid interval (in terms of wavelength, twice the grid interval, as Corinna mentioned). Effects at smaller scales may be represented like a Reynolds stress, but that is a crude approximation.

    >+ Climate models so far cannot reproduce long-term periodic climatic patterns such as the PDO. (I believe this is related to the issue above!)

    I think, and I said, that they do reproduce the PDO. But the phase of the oscillation is usually random (this is a rough expression; technically it should rather be called "chaotic"), so the reproduction is not useful as a prediction. But in cases where we can supply a very good initial condition for the state of the ocean, my colleagues find predictability up to several years ahead.

    T. Mochizuki et al., 2010: Pacific decadal oscillation hindcasts relevant to near-term climate prediction. PNAS, 107, 1833 - 1837.
    http://www.pnas.org/content/107/5/1833

    The rest is difficult for me. I think that whether a climate model is ergodic or not has theoretical significance only. The equally difficult but more practically relevant problem is whether we can separate short- and long-time-scale phenomena and treat the short-term phenomena like a Reynolds stress acting on the long-term phenomena.

    Ko-1 M. (Kooiti Masuda)

  59. Eduardo 12
    You state that historical levels of CO2 have never been higher. What do you make of the statements by Lindzen and Dowlatabadi here?
    Is this a problem similar to that with the historical temperature records?

  60. @ 59

    This is a 1-hour video clip! It will take me some time to answer your question, but perhaps you can spell the question out here for our readers as well?

    Björn would like to see how different simulations are when started with different initial conditions but otherwise driven by the same greenhouse-gas forcing. Here you can find the climate change signal simulated by the MPI model under scenario A2. This is one of the most pessimistic scenarios, so the signal-to-noise ratio is large. By noise I mean the variability induced by the different initial conditions.

  62. Eduardo 60

    Sorry, I know it is a long video, and I did not watch all of it. At some point one of the two makes the assertion that CO2 levels have been historically much higher than they are today. I think they refer to a timespan of millions of years though.

  63. @ 62

    Reiner, I will watch the video when I have a bit more time.
    In geological times, the atmospheric CO2 concentration has been higher than today. Thirty million years ago it was roughly 1500 ppm (parts per million), declining to about 400 ppm over the last few million years. In the past million years it has oscillated between a glacial value of 180 ppm and an interglacial value of 280 ppm. The glacial-interglacial transitions take place over a few thousand years. Today's increase from 280 ppm to 380 ppm has occurred in just 100 years.
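
    To put the rates side by side (taking 'a few thousand years' as, say, 5000 years for the glacial-interglacial transitions):

        \frac{100\ \mathrm{ppm}}{5000\ \mathrm{yr}} = 0.02\ \mathrm{ppm\,yr^{-1}} \quad\text{versus}\quad \frac{100\ \mathrm{ppm}}{100\ \mathrm{yr}} = 1\ \mathrm{ppm\,yr^{-1}},

    i.e. the present rise is roughly fifty times faster.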

    Commenters who would like to have a closer look at climate models may find this review by Nanne Weber interesting.

  65. @Eduardo:
    Your link had an inadvertent %22 at the end, giving a dead link. Here is the functional link:
    http://wires.wiley.com/WileyCDA/WiresArticle/wisId-WCC24.html

    Thanks, Marco. It is very instructive indeed to look at the source data. Quite remarkable is the artefact in Simulation 1 in the lower right corner of the picture, which does not show up in the other simulations. On closer inspection, there are several regions where the simulations differ by 3°C, although the averages tend to be close together.

    Are there more examples on the net like this?

    Eduardo: I have read two thirds of the Greg Holloway paper on ocean modeling and I am quite confused. He is looking for an ocean model in which the entropy does not decrease over time, except through external drivers. Is this the right concept? I mean, can you externalize the drivers of an open system to such an extent that they are not considered part of the system?

    Kooiti: Thanks for the very valuable information on climate models. Unfortunately, I can only see the abstract of the PNAS paper. Eduardo has my e-mail address, in case you would be willing to send me the full text.

    I will come back to the last three bullet points but need some sleep now.

  67. @ 66
    Björn,

    I am not sure whether I properly understood your question about external drivers. The external drivers, what we call forcings, are prescribed in the simulations; they are not calculated interactively by the model. They are usually the solar irradiance, the atmospheric concentrations of greenhouse gases, volcanic eruptions or land use. However, the external drivers may depend on how the model is used. For instance, a climate model that includes a carbon-cycle model would accept as external driver the prescribed anthropogenic emissions of CO2. The CO2 would then be distributed interactively within the climate system (atmosphere, ocean, biosphere), depending on the temperature, precipitation, ocean dynamics, primary production by the biosphere, etc.

    The simulation plots were prepared by myself. You can access the data here. They are written in netCDF format, a format quite common for large climate data sets. One would need to spend a certain amount of time getting acquainted with the software to handle those data; there are several public software packages.
    Other than that, the last IPCC Report, chapter 10, contains a good collection of plots of climate simulations.
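
    As a minimal sketch of how one could get started (using the netCDF4 Python package, one of several options; the file and variable names below are only placeholders, the actual files may use different ones):

        from netCDF4 import Dataset

        nc  = Dataset("simulation.nc")            # placeholder file name
        tas = nc.variables["tas"][:]              # e.g. near-surface air temperature, dims (time, lat, lon)
        print(tas.shape, tas.mean(axis=0).shape)  # time mean at every grid cell
        nc.close()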

  68. @ 56

    Role of clouds in climate sensitivity

    Perhaps I should rather have written that the behavior of clouds mostly determines the differences between models. Other feedbacks, like water vapor and lapse rate, seem to be more consistent across the models.
    This is an interesting paper. Figure 1 there shows that the cloud-cover responses simulated by two models can be opposite. These two models displayed almost the extreme values of climate sensitivity (smallest and largest) in the previous IPCC Report (2001).
