What would be the consequences for estimates of future climate change if the reconstructions of the climate of the past few millennia were wrong? Since estimates of future climate change are presently based solely on model simulations, they would not need to be modified. However, reconstructions of past climates, in particular of the climate of the past millennium, do interact in a subtle and, for many, perhaps surprising way with our understanding of the basic functioning of the Earth's climate, and in this sense they also project into the future.
The key word for this link is the much vaunted concept of climate sensitivity and its connection to the amplitude and phase of past climate variations. Climate variations can basically be classified as externally forced and internally generated. The externally forced variations are due to variations in external agents that may affect climate, i.e. solar variations, concentrations of greenhouse gases, volcanic eruptions, anthropogenic land-use changes, etc. Roughly speaking, the amplitude of externally forced variations of surface temperature is proportional to the amplitude of the variations of the external climate forcing, the proportionality constant being the climate sensitivity. It is plausible, although by no means proven, that the climate sensitivity is independent of the nature of the external forcing (advocates of a solar influence on climate would rather defend, rightly or wrongly, the concept of a forcing-dependent sensitivity). The timing of the externally forced variations must also be related to the variations in the external forcing, although the two are not necessarily simultaneous. For instance, if solar irradiance reaches a maximum in some particular decade, the global mean temperature would also reach a maximum after some lag, provided that all other external forcings remain constant.
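To make this proportionality and lag concrete, here is a minimal sketch of a zero-dimensional energy-balance model (a toy illustration, not any particular climate model; the heat capacity, feedback parameter and forcing cycle are all assumed values). The inverse of the feedback parameter plays the role of the sensitivity, and the temperature maximum lags a periodic forcing maximum:

```python
import numpy as np

# Toy zero-dimensional energy balance:  C * dT/dt = F(t) - lam * T
# The equilibrium response to a constant forcing F is T = F / lam,
# i.e. proportional to the forcing, with sensitivity S = 1 / lam.

C = 8.0     # assumed effective heat capacity (W yr m^-2 K^-1)
lam = 1.0   # assumed feedback parameter (W m^-2 K^-1)

dt = 0.1                                    # time step (years)
t = np.arange(0.0, 200.0, dt)
F = 0.5 * np.sin(2.0 * np.pi * t / 120.0)   # idealized 120-year forcing cycle

T = np.zeros_like(t)
for i in range(1, len(t)):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

# The temperature maximum lags the forcing maximum by roughly C/lam years.
half = len(t) // 2   # skip the spin-up
lag = t[half + np.argmax(T[half:])] - t[half + np.argmax(F[half:])]
print(f"peak response {T.max():.2f} K, lagging the forcing by ~{lag:.1f} yr")
```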
By contrast, internal climate variations are not related to the external forcing. Their amplitude and timing are random, and their spatial structure is determined by the physical processes that define the climate system. Known examples of internal variability with timescales of a few years are the North Atlantic Oscillation, ENSO, etc. Also, the slowdown of global temperatures observed in the past decade may be the result of natural internal variations superimposed on an underlying rising trend. However, internal variations with typical timescales of several decades or even longer may well exist, although they are not as well characterized as the high-frequency internal variations. One example of these slow internally generated climate variations is the Atlantic Multidecadal Oscillation. Multidecadal and centennial internal variations may display other, as yet unknown, spatial structures.
Internal variations are not caused by external forcings, but there could be a subtle connection between them and the climate sensitivity. This connection is based on the fluctuation-dissipation theorem. This is not the place to explain this theorem and its applicability to the climate system, which is also debated, but its meaning can be intuitively illustrated. If the climate system is a stiff system, i.e. it exerts a large resistance to external influences (low sensitivity), then the very same processes that are responsible for this stiffness will tend to quickly wipe out any random fluctuations that may be internally generated. On the other hand, if the system is soft and does not resist the effects of the external forcings (high sensitivity), any random fluctuations will tend to persist for longer times.
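This intuition can be illustrated with the same toy model, now left unforced and driven only by random 'weather' noise (again a sketch with assumed parameter values, not a statement about the real climate): a stiff system (large feedback parameter, i.e. low sensitivity) produces weak and short-lived fluctuations, while a soft system (small feedback parameter, i.e. high sensitivity) produces strong and persistent ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def unforced_fluctuations(lam, C=8.0, dt=0.1, n=200_000, sigma=0.3):
    """Toy energy balance driven only by white-noise 'weather':
    C * dT/dt = -lam * T + noise.  No external forcing at all."""
    T = np.zeros(n)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for i in range(1, n):
        T[i] = T[i - 1] + (dt * (-lam * T[i - 1]) + noise[i]) / C
    return T

for lam in (2.0, 0.5):   # stiff (low sensitivity) vs. soft (high sensitivity)
    T = unforced_fluctuations(lam)
    # the standard deviation and the memory of the fluctuations
    # both grow as the feedback parameter lam shrinks
    print(f"lam={lam}: std = {T.std():.3f} K, "
          f"fluctuation memory ~ C/lam = {8.0/lam:.0f} yr")
```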
We therefore have different 'scenarios' to describe past temperature variations:
- a very small past climate variability, with temperature variations more or less following the external forcing. This implies a low climate sensitivity or very small variations in the external forcing in the past. This is the hockey-stick scenario. If the amplitude of past solar variations had not been that small, the hockey-stick scenario would imply a low climate sensitivity, and therefore also small climate changes in the future. This explains why recent estimates of past solar variations pointing to a very small amplitude were warmly and rapidly welcomed in some quarters.
- a large past variability, as for instance hinted at by other temperature reconstructions (Moberg et al., Esper et al.). The larger variability would require a larger climate sensitivity or larger variations in the external forcing. Although one would think that indications of a larger climate sensitivity could find an easier way into the IPCC quarters, it raises another problem: the role of solar forcing in the temperature trend observed over the last two centuries or so would have been larger as well, and therefore the influence of greenhouse gases would have to have been smaller to accommodate the observed temperature increase.
An intriguing scenario, which I am not sure is possible at all, is to have large internal climate variations with a small climate sensitivity. For this scenario to be correct, the temperature variations should be uncorrelated with the external forcing. In essence, this would be one of the alternative 'theories' to explain the 20th century warming. For this theory to be true, the Medieval Warm Period could have been a prominent feature, but solar irradiance should not have been very high; otherwise a warm MWP could be easily explained by the standard paradigm of climate sensitivity. I must admit that I have several problems with the idea of an important role of past internal variations. Does the fluctuation-dissipation theorem, applied to an open system, require that high sensitivity go hand-in-hand with high-amplitude internal variations as well, or only with long-lived internal variations? Perhaps some readers, with more knowledge than me, may have some comments on the applicability of the fluctuation-dissipation theorem in this context.
An interesting question relates to our pseudo-proxy studies (von Storch et al. 2004). Given that the main conclusion of that study was that past climate variations had been underestimated in the proxy-based reconstructions, and that they were probably externally forced (at least this is what climate simulations indicate), these results hinted at a high climate sensitivity. Why then were Mann, Jones and Rahmstorf so fiercely opposed to this and other similar studies?
Eduardo,
For me, as an outsider, what you are saying about climate models in the beginning of the post is pretty astonishing. How are models validated then? What criteria serve to compare them?
In the world that I know, a model makes predictions, specific experiments are done to test the predictions, then the model is tweaked. On top of that - and almost always when the direct experiments are not possible (e.g. epidemiology) - every effort is made to see how the model's output compares to the past knowledge.
Obviously, the experiments on planetary climate are not possible and we don't even have time to see if the predictions come true. And yet no effort is/was made to see how the past could have been predicted by the naive model? Wow. Doesn't it mean that, effectively, every model is just as good as any other one?
@Nanonymous
The models are calibrated on the last 30 years, not the last and uncertain 1000.
@Edu
Nice article.
One should add that many people think the last 1000 years are not really suited for this kind of question in any case. The external forcing is way too small and the models too uncertain to make a proper distinction between internal variability and external forcing (within each model you can, but that does not settle the question in nature).
These people prefer at least the scale of glacial-interglacial changes for making statements on climate sensitivity (Annan & Hargreaves, etc.).
Merry Christmas
@ anonymous
Models are validated against observations over the last, say, 100 years, as Georg wrote. However, the story is slightly more complex. As the aerosol forcing in the 20th century is uncertain, modellers have some leeway to tune the model simulations to match the observations (Kiehl, GRL 2007).
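To illustrate that leeway with deliberately made-up numbers (this is only a sketch of the logic of Kiehl's argument, not his calculation): two models with different sensitivities can both reproduce the same observed warming if each is allowed to tune its own aerosol scaling:

```python
# All numbers below are invented for illustration only.
ghg_forcing = 2.5        # W m^-2, assumed 20th-century GHG forcing
aerosol_forcing = -1.0   # W m^-2, uncertain baseline aerosol forcing
observed_warming = 0.8   # K, illustrative target

for S in (0.5, 1.0):     # two assumed sensitivities (K per W m^-2)
    # equilibrium warming: T = S * (ghg + a * aerosol); solve for a
    a = (observed_warming / S - ghg_forcing) / aerosol_forcing
    print(f"sensitivity {S}: tuned aerosol scaling a = {a:.2f} "
          f"reproduces the observed {observed_warming} K")
```

Each sensitivity gets its own compensating aerosol scaling, so matching the 20th century alone does not pin down the sensitivity.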
@ Georg
The work by Annan and Hargreaves is quite interesting. The glacial and interglacial states might be too different to learn about the present climate sensitivity, but the past millennium has indeed some drawbacks as well. Several avenues are needed, with no guarantee of success.
Eduardo
Your comments about the prediction of future climate based on models would be acceptable if the predictions of present-day climate models were independent of the many empirical "constants" in the models, which are tuned to best reproduce observational data. At least some of these "constants" vary according to the time period chosen for the best fit, which means that they are not "constant" in the physical sense (i.e. like the speed of light). The famous 1988 prediction by James Hansen of climate catastrophe before 2008 used such so-called constants based on past observations. Those predictions failed. With new "constants" they failed again in predicting post-1998 temperatures. What this means is that either the models miss essential physical mechanisms, or the simplifications introduced in the basic equations to allow their numerical computation with present-day computers were inappropriate.
J.D.Domingos
@Jose - I do not know "the famous 1988 prediction of climate catastrophe before 2008 of James Hansen" - can you give a source where that prediction can be found? I only know that he claimed that AGW would be the cause of the extraordinary heat in summer 1988 in the US.
Hans
The source is the written testimony of J. Christy, 25 February 2009, to House Ways and Means:
http://waysandmeans.house.gov/media/pdf/111/ctest.pdf
I used it on my web page (sorry, most of it in Portuguese) at
http://jddomingos.ist.utl.pt
@Georg:
"The models are calibrated on the last 30 years, not the last and uncertain 1000"
Sorry, this is so deeply unsatisfying that I am completely shocked that anyone ever took any of these models seriously.
30 years is too short a time on the scale of planetary climate. This is like modeling athletes' performance in a 10,000 meter run based on a bunch of known physical parameters before the run and their state during one given half-second of it. It's a task of about the same complexity, and if only that little of all possible dynamic behavior is captured, it is similarly meaningless.
One contribution to climate model validation:
Have a look at Richard Lindzen's paper in GRL:
http://www.agu.org/pubs/crossref/2009/2009GL039628.shtml
He investigated the climate sensitivity for the IPCC models and for the actual climate system. Amazing results based on ERBE data, showing that the climate sensitivity of the actual climate system is only about 0.5 C, whereas for the IPCC models it is 1.5-5 degrees. I can't see any way around his results; his findings are based on the data. The models simply do not compare well for this very essential process.
He discussed this in a presentation:
http://wattsupwiththat.com/2009/10/27/lindzen-deconstructing-global-warming/
In addition to the model validation summarized in AR4, which was already worrying, this is enough to step back from the concept of using these models for future climate projections. The feedbacks in the climate models actually act in the opposite direction to those in the natural system.
Corinna
Spencer on the Lindzen and Choi paper:
http://www.drroyspencer.com/2009/11/some-comments-on-the-lindzen-and-choi-2009-feedback-study/
And Judith Curry has sent a comment on the same paper to Climate Audit. I hope it will appear soon.
http://climateaudit.org/2009/12/23/von-storch-wsj-editorial/#comment-212451
Dear Eduardo, I am among the few skeptics who do think that climate models - as a matter of principle - are a legitimate tool that is bound to be useful sometime in the future, and I disagree with a large part of the aprioristic criticism directed against them.
However, I would agree with Nanonymous or others that the approach to verification - or non-verification - of the climate models that you presented means that this methodology is not scientific. It is at most science-inspired.
But science must always test hypotheses against all sufficiently relevant evidence. Of course, the answer to the question "how do we recognize which models are good and which are bad" is that the people similar to the "typical" IPCC members are indeed failing to do so.
The power of models is not determined by their success in passing tests, but by the ability of their creators to copy them elsewhere and force many other people to use them. This spontaneous sociological method of achieving "consensus" - which is de facto isolated from any empirical evidence - is clearly not science. Moreover, they openly take "ensembles" of different models - as if the models were "friends" of each other. Competing models - or theories - can never be "friends" in science. Science is a permanent confrontation. It's a permanent battle for the theories and models to survive, and at most one among non-equivalent candidate theories may survive. If it is not a battle, it is not science.
Also, while the reconstruction of 20th century data could be relevant, it is far too limited and noisy a body of evidence to test the models. It's very clear that many events in the distant past may provide us with much sharper information about the validity of various assumptions, parameterizations, and values of parameters in the models than some boring and confusing 20th century events.
Even if a paleoclimatological event is incompletely known, it may be so special that it may constrain models much more cleanly than some foggy 20th century events with an unclear attribution. In fact, I think that most existing models would be immediately falsified - "qualitatively" - if they were compared with some obvious facts about the Earth's history, including its relative stability and the known estimates of the temperature fluctuations at different timescales and indirect data known from biology, archeology etc.
...
... These are the things that matter. The climate models must be able to pass the tests of known facts about the Earth in the last 30, 100, 500, 2000, 20,000, 650,000, 10 million, 500 million, and 5 billion years. These different timescales represent different kinds of phenomena and conditions, but the models must be consistent with all of them. They must describe not only some chaotic curves with some vague trends but also the right color of the noise, reasonable power laws for the autocorrelation, the correct dependence of the autocorrelation on spatial and temporal distances, the ability of the known fossils to live in the environment where they apparently lived, and so on.
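As an aside, the first of these tests is easy to state operationally; here is a minimal sketch of estimating the "color of the noise" as a log-log spectral slope (assuming numpy; model_series and observed_series are hypothetical placeholder names, not real variables from any model):

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate the 'color' of a time series as the log-log slope of its
    power spectrum: ~0 for white noise, approaching -2 for red noise."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]   # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope

# A model passing this test should produce roughly the same slope as the
# record it is compared with, e.g. (placeholder names):
# print(spectral_slope(model_series), spectral_slope(observed_series))
```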
It's just an extremely primitive, bad, and unreliable form of science if someone deliberately takes a very narrow-minded set of conceivable tests that the models should pass with a certain accuracy. A real scientist must always try to think about the most diverse possible spectrum of tests. One reliable test of any kind is enough to falsify a hypothesis - or a model - and that's one of the primary procedures that scientists should do.
What you write contains interesting seeds of ideas, but it also de facto implies that what the climate modelers have been doing in recent decades is not worth a penny - despite the tens of billions of dollars spent on this discipline - and that proper scientists would effectively have to start from scratch, anyway, if they wanted to do this thing scientifically.
The existing climate models are inevitably conglomerates of tumors where every component is more likely to be wrong than right because they haven't been tested by a sufficient number of tests to verify them.
It's not enough for a model to use scientific terminology or equations that are similar to those in physics textbooks. There can still be lots of mistakes, redundancies, omissions, overestimates, and underestimates in their combination. If these things are not tested against the maximum diversity of available data - and we have many kinds of data, as everyone who is able to look at the world from many (especially mathematical) angles realizes - then the result has very limited scientific value, if any.
Best wishes
Lubos
@ 10,11
Dear Lubos,
I understand your criticism, but my post was not about validating climate models. I first described a fact: models and proxies are mostly unconnected so far, so that revisions in climate reconstructions do not affect climate projections, for better or for worse.
Then I discussed how reconstructions may shed light on some properties of the climate system. I fail to see the relationship to model validation.
@10,11,12, Lubos, Edu: - But this may be a good opportunity to deal with the issue of "validation" of models in a future discussion.
By the way, Peter Müller and I have dealt quite a bit with this issue in our 2004 book:
Müller, P., and H. von Storch, 2004: Computer Modelling in Atmospheric and Oceanic Sciences - Building Knowledge. Springer Verlag, Berlin - Heidelberg - New York, 304 pp., ISSN 1437-028X
Please understand that the usage of the term "model" and the generation of added knowledge are rather different in different epistemic communities.